Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636154467 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov 5 23:21:09.672: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.677: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 5 23:21:09.703: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 5 23:21:09.763: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting
Nov 5 23:21:09.763: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting
Nov 5 23:21:09.763: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 5 23:21:09.763: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 5 23:21:09.763: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 5 23:21:09.774: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 5 23:21:09.774: INFO: e2e test version: v1.21.5
Nov 5 23:21:09.775: INFO: kube-apiserver version: v1.21.1
Nov 5 23:21:09.775: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.780: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Nov 5 23:21:09.776: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.797: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Nov 5 23:21:09.780: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.801: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Nov 5 23:21:09.789: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.813: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 5 23:21:09.793: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.814: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Nov 5 23:21:09.798: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.819: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Nov 5 23:21:09.804: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.826: INFO: Cluster IP family: ipv4
Nov 5 23:21:09.806: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.826: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 5 23:21:09.806: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.827: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
Nov 5 23:21:09.817: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:21:09.838: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
W1105 23:21:09.910491 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.910: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.912: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1105 23:21:09.917858 30 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should support CronJob API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov 5 23:21:09.926: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov 5 23:21:09.929: INFO: starting watch
STEP: patching
STEP: updating
Nov 5 23:21:09.943: INFO: waiting for watch events with expected annotations
Nov 5 23:21:09.943: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:09.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7953" for this suite.
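For reference, the create/get/list/patch/delete-collection steps this spec walks through map onto client-go roughly as follows; a minimal sketch assuming an existing clientset, with an illustrative schedule and pod template (the e2e framework's own helpers differ in detail):

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func exerciseCronJobAPI(ctx context.Context, c kubernetes.Interface, ns string) error {
	// batch/v1 is the replacement for the deprecated batch/v1beta1 flagged above.
	cronjobs := c.BatchV1().CronJobs(ns)

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cronjob"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // illustrative
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{
								{Name: "c", Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", Args: []string{"pause"}},
							},
						},
					},
				},
			},
		},
	}

	// creating / getting / listing
	if _, err := cronjobs.Create(ctx, cj, metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := cronjobs.Get(ctx, cj.Name, metav1.GetOptions{}); err != nil {
		return err
	}
	if _, err := cronjobs.List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}

	// patching (an annotation, the kind of change the watch assertions observe)
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := cronjobs.Patch(ctx, cj.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}

	// deleting / deleting a collection
	if err := cronjobs.Delete(ctx, cj.Name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return cronjobs.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{})
}
```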
• ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":1,"skipped":27,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:09.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl W1105 23:21:09.892863 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 5 23:21:09.893: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 5 23:21:09.894: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Nov 5 23:21:09.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8436 cluster-info' Nov 5 23:21:10.121: INFO: stderr: "" Nov 5 23:21:10.121: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:10.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8436" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:09.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api W1105 23:21:09.831162 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 5 23:21:09.831: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 5 23:21:09.835: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:21:09.848: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e" in namespace "downward-api-4456" to be "Succeeded or Failed" Nov 5 23:21:09.857: INFO: Pod "downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.847816ms Nov 5 23:21:11.862: INFO: Pod "downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013717942s Nov 5 23:21:13.866: INFO: Pod "downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017734152s Nov 5 23:21:15.870: INFO: Pod "downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021480878s STEP: Saw pod success Nov 5 23:21:15.870: INFO: Pod "downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e" satisfied condition "Succeeded or Failed" Nov 5 23:21:15.872: INFO: Trying to get logs from node node1 pod downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e container client-container: STEP: delete the pod Nov 5 23:21:15.892: INFO: Waiting for pod downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e to disappear Nov 5 23:21:15.894: INFO: Pod downwardapi-volume-d88bf67a-6009-476a-b579-e634905fad1e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:15.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4456" for this suite. 
• [SLOW TEST:6.099 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
W1105 23:21:09.844234 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.844: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.848: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Nov 5 23:21:09.868: INFO: Waiting up to 5m0s for pod "client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9" in namespace "containers-7683" to be "Succeeded or Failed"
Nov 5 23:21:09.876: INFO: Pod "client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116114ms
Nov 5 23:21:11.879: INFO: Pod "client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010646371s
Nov 5 23:21:13.881: INFO: Pod "client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013008453s
Nov 5 23:21:15.887: INFO: Pod "client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018303937s
STEP: Saw pod success
Nov 5 23:21:15.887: INFO: Pod "client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9" satisfied condition "Succeeded or Failed"
Nov 5 23:21:15.888: INFO: Trying to get logs from node node1 pod client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9 container agnhost-container: <nil>
STEP: delete the pod
Nov 5 23:21:15.905: INFO: Waiting for pod client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9 to disappear
Nov 5 23:21:15.907: INFO: Pod client-containers-e4ea4ad3-17f5-4d63-9c5e-6063d51015c9 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:15.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7683" for this suite.
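The "override all" case above sets both fields on the container: Command replaces the image's ENTRYPOINT and Args replaces its CMD. A minimal sketch with illustrative values:

```go
package main

import corev1 "k8s.io/api/core/v1"

func overriddenContainer() corev1.Container {
	return corev1.Container{
		Name:  "agnhost-container",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
		// With both set, the image's default entrypoint and arguments are ignored.
		Command: []string{"/agnhost"},
		Args:    []string{"pause"}, // illustrative arguments
	}
}
```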
• [SLOW TEST:6.091 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
W1105 23:21:09.858116 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.858: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.860: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token:
Nov 5 23:21:09.875: INFO: Waiting up to 5m0s for pod "test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2" in namespace "svcaccounts-901" to be "Succeeded or Failed"
Nov 5 23:21:09.879: INFO: Pod "test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132836ms
Nov 5 23:21:11.882: INFO: Pod "test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006596715s
Nov 5 23:21:13.885: INFO: Pod "test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010284451s
Nov 5 23:21:15.890: INFO: Pod "test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014477292s
STEP: Saw pod success
Nov 5 23:21:15.890: INFO: Pod "test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2" satisfied condition "Succeeded or Failed"
Nov 5 23:21:15.893: INFO: Trying to get logs from node node2 pod test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2 container agnhost-container: <nil>
STEP: delete the pod
Nov 5 23:21:15.909: INFO: Waiting for pod test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2 to disappear
Nov 5 23:21:15.911: INFO: Pod test-pod-616ee0ce-2c1a-488b-ad18-ddc45a7e9ce2 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:15.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-901" for this suite.
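A projected service-account token volume has the same shape as the auto-mounted kube-api-access-* volume visible in the DNS spec's pod dump further down. A minimal sketch, with an illustrative volume name and expiry:

```go
package main

import corev1 "k8s.io/api/core/v1"

func projectedTokenVolume() corev1.Volume {
	expiry := int64(3600) // seconds; illustrative
	return corev1.Volume{
		Name: "sa-token",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					// The kubelet rotates this token and rewrites the file.
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					},
				}},
			},
		},
	}
}
```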
• [SLOW TEST:6.081 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:15.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of pods
Nov 5 23:21:15.982: INFO: created test-pod-1
Nov 5 23:21:15.992: INFO: created test-pod-2
Nov 5 23:21:16.000: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:16.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2350" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W1105 23:21:09.834801 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.835: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.836: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Nov 5 23:21:09.853: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6457 91d1ebb5-ce38-4820-b275-b4af954be24b 34823 0 2021-11-05 23:21:09 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-11-05 23:21:09 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8dkcq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dkcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:
[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:21:09.859: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:11.863: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:13.863: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:15.863: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Nov 5 23:21:15.863: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6457 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:21:15.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Nov 5 23:21:15.970: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6457 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:21:15.970: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:21:16.080: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:16.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6457" for this suite. 
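The essential part of that long pod dump is the DNS configuration: dnsPolicy None tells the kubelet to ignore cluster DNS entirely and build resolv.conf from dnsConfig alone. A minimal sketch of just that shape, using the nameserver and search values from the dump:

```go
package main

import corev1 "k8s.io/api/core/v1"

func customDNSPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "agnhost-container",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			Args:  []string{"pause"},
		}},
		// None: resolv.conf comes only from DNSConfig below.
		DNSPolicy: corev1.DNSNone,
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
		},
	}
}
```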
• [SLOW TEST:6.287 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W1105 23:21:09.919450 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.919: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.921: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Nov 5 23:21:09.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c" in namespace "downward-api-2641" to be "Succeeded or Failed"
Nov 5 23:21:09.937: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82401ms
Nov 5 23:21:11.941: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006788607s
Nov 5 23:21:13.943: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00946124s
Nov 5 23:21:15.946: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01166261s
Nov 5 23:21:17.953: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019339938s
Nov 5 23:21:19.958: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023562255s
STEP: Saw pod success
Nov 5 23:21:19.958: INFO: Pod "downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c" satisfied condition "Succeeded or Failed"
Nov 5 23:21:19.960: INFO: Trying to get logs from node node2 pod downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c container client-container: <nil>
STEP: delete the pod
Nov 5 23:21:20.084: INFO: Waiting for pod downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c to disappear
Nov 5 23:21:20.086: INFO: Pod downwardapi-volume-ea5aa7c6-bc6b-4289-a613-21ec6714b08c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:20.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2641" for this suite.
• [SLOW TEST:10.198 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W1105 23:21:09.936427 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.936: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.938: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-ff6990a0-79bb-4cf2-8d2e-d92752f13ce4
STEP: Creating a pod to test consume secrets
Nov 5 23:21:09.954: INFO: Waiting up to 5m0s for pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f" in namespace "secrets-5550" to be "Succeeded or Failed"
Nov 5 23:21:09.957: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386751ms
Nov 5 23:21:11.959: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0052606s
Nov 5 23:21:13.963: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008599723s
Nov 5 23:21:15.966: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012310176s
Nov 5 23:21:17.971: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016778538s
Nov 5 23:21:19.974: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019676674s
STEP: Saw pod success
Nov 5 23:21:19.974: INFO: Pod "pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f" satisfied condition "Succeeded or Failed"
Nov 5 23:21:19.976: INFO: Trying to get logs from node node2 pod pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f container secret-volume-test: <nil>
STEP: delete the pod
Nov 5 23:21:20.086: INFO: Waiting for pod pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f to disappear
Nov 5 23:21:20.088: INFO: Pod pod-secrets-732832c6-51b3-475a-9e36-8b47f1a9572f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:20.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5550" for this suite.
• [SLOW TEST:10.181 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":30,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":38,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:20.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:20.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7324" for this suite.
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":2,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:16.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-30a61330-e93e-468c-bef7-35bc4f4706bf STEP: Creating a pod to test consume secrets Nov 5 23:21:16.218: INFO: Waiting up to 5m0s for pod "pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5" in namespace "secrets-8357" to be "Succeeded or Failed" Nov 5 23:21:16.221: INFO: Pod "pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54232ms Nov 5 23:21:18.225: INFO: Pod "pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006658852s Nov 5 23:21:20.230: INFO: Pod "pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011182406s STEP: Saw pod success Nov 5 23:21:20.230: INFO: Pod "pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5" satisfied condition "Succeeded or Failed" Nov 5 23:21:20.232: INFO: Trying to get logs from node node1 pod pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5 container secret-volume-test: STEP: delete the pod Nov 5 23:21:20.246: INFO: Waiting for pod pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5 to disappear Nov 5 23:21:20.247: INFO: Pod pod-secrets-b5e763e1-c179-49f9-a989-1ec36b6802a5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:20.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8357" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:15.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-bbc696ee-fec5-4f3d-b251-d34698f922a4 STEP: Creating a pod to test consume configMaps Nov 5 23:21:15.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57" in namespace "projected-8193" to be "Succeeded or Failed" Nov 5 23:21:15.966: INFO: Pod "pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402354ms Nov 5 23:21:17.970: INFO: Pod "pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006552245s Nov 5 23:21:19.974: INFO: Pod "pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010329453s Nov 5 23:21:21.978: INFO: Pod "pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014385426s STEP: Saw pod success Nov 5 23:21:21.978: INFO: Pod "pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57" satisfied condition "Succeeded or Failed" Nov 5 23:21:21.981: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57 container agnhost-container: STEP: delete the pod Nov 5 23:21:21.992: INFO: Waiting for pod pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57 to disappear Nov 5 23:21:21.994: INFO: Pod pod-projected-configmaps-5e4de123-80fe-4dcb-aad9-23bca29f5d57 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:21.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8193" for this suite. 
• [SLOW TEST:6.070 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:09.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1105 23:21:09.976563 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:21:09.976: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:21:09.978: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 5 23:21:23.044: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:23.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5339" for this suite.
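With FallbackToLogsOnError, container logs are used as the termination message only when the container fails; a succeeding container that writes nothing to the message file therefore reports an empty message, which is what the spec asserts. A sketch of that container shape, with an illustrative image and command:

```go
package main

import corev1 "k8s.io/api/core/v1"

func terminationMessageContainer() corev1.Container {
	return corev1.Container{
		Name:    "termination-message-container",
		Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative
		Command: []string{"/bin/sh", "-c", "exit 0"},       // succeeds, writes nothing
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}
```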
• [SLOW TEST:13.112 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:10.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-731324f7-7890-48ed-9d7a-976324c7fa47
STEP: Creating a pod to test consume configMaps
Nov 5 23:21:10.140: INFO: Waiting up to 5m0s for pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea" in namespace "configmap-7242" to be "Succeeded or Failed"
Nov 5 23:21:10.142: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172047ms
Nov 5 23:21:12.147: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007534947s
Nov 5 23:21:14.151: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011474576s
Nov 5 23:21:16.154: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014371308s
Nov 5 23:21:18.157: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017656696s
Nov 5 23:21:20.167: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027557958s
Nov 5 23:21:22.171: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Running", Reason="", readiness=true. Elapsed: 12.031115952s
Nov 5 23:21:24.177: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.037372987s
STEP: Saw pod success
Nov 5 23:21:24.177: INFO: Pod "pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea" satisfied condition "Succeeded or Failed"
Nov 5 23:21:24.180: INFO: Trying to get logs from node node2 pod pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea container agnhost-container: <nil>
STEP: delete the pod
Nov 5 23:21:24.196: INFO: Waiting for pod pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea to disappear
Nov 5 23:21:24.198: INFO: Pod pod-configmaps-5843b1b3-af72-493f-9e5e-a2e7e695dcea no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:24.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7242" for this suite.
• [SLOW TEST:14.157 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":66,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:16.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Nov 5 23:21:16.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1997 create -f -'
Nov 5 23:21:16.432: INFO: stderr: ""
Nov 5 23:21:16.432: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Nov 5 23:21:17.436: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:17.436: INFO: Found 0 / 1
Nov 5 23:21:18.435: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:18.435: INFO: Found 0 / 1
Nov 5 23:21:19.435: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:19.435: INFO: Found 0 / 1
Nov 5 23:21:20.435: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:20.435: INFO: Found 0 / 1
Nov 5 23:21:21.436: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:21.436: INFO: Found 0 / 1
Nov 5 23:21:22.436: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:22.436: INFO: Found 0 / 1
Nov 5 23:21:23.436: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:23.436: INFO: Found 0 / 1
Nov 5 23:21:24.436: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:24.436: INFO: Found 0 / 1
Nov 5 23:21:25.435: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:25.435: INFO: Found 0 / 1
Nov 5 23:21:26.436: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:26.436: INFO: Found 1 / 1
Nov 5 23:21:26.436: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Nov 5 23:21:26.438: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:26.438: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Nov 5 23:21:26.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1997 patch pod agnhost-primary-crdmv -p {"metadata":{"annotations":{"x":"y"}}}'
Nov 5 23:21:26.605: INFO: stderr: ""
Nov 5 23:21:26.605: INFO: stdout: "pod/agnhost-primary-crdmv patched\n"
STEP: checking annotations
Nov 5 23:21:26.608: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 5 23:21:26.608: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:26.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1997" for this suite.
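The kubectl patch invocation above, expressed directly against the API; a minimal sketch assuming an existing clientset:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// Equivalent of: kubectl patch pod <name> -p '{"metadata":{"annotations":{"x":"y"}}}'
func annotatePod(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err := c.CoreV1().Pods(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```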
• [SLOW TEST:10.523 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":3,"skipped":45,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:22.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:21:22.134: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d9c5767f-7a5c-49c4-87e7-eb2a43eb18d9", Controller:(*bool)(0xc004cc3cb2), BlockOwnerDeletion:(*bool)(0xc004cc3cb3)}}
Nov 5 23:21:22.139: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b2a6e522-8df3-4c64-8a47-bbe6019d3e35", Controller:(*bool)(0xc004da09ca), BlockOwnerDeletion:(*bool)(0xc004da09cb)}}
Nov 5 23:21:22.145: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bda03351-ec0e-4e58-a201-58350681c81d", Controller:(*bool)(0xc0049e651a), BlockOwnerDeletion:(*bool)(0xc0049e651b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:21:27.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6531" for this suite.
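The dependency circle printed above is pod1 owned by pod3, pod2 owned by pod1, and pod3 owned by pod2; the garbage collector must not deadlock on it. A sketch of wiring up those ownerReferences, assuming the three pods were already created (so their UIDs are set):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func linkOwnerCircle(pod1, pod2, pod3 *corev1.Pod) {
	truth := true
	ref := func(owner *corev1.Pod) []metav1.OwnerReference {
		return []metav1.OwnerReference{{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               owner.Name,
			UID:                owner.UID,
			Controller:         &truth,
			BlockOwnerDeletion: &truth,
		}}
	}
	// Matches the log: pod1 -> pod3, pod2 -> pod1, pod3 -> pod2.
	pod1.OwnerReferences = ref(pod3)
	pod2.OwnerReferences = ref(pod1)
	pod3.OwnerReferences = ref(pod2)
}
```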
• [SLOW TEST:5.082 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:09.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook W1105 23:21:09.848713 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 5 23:21:09.849: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 5 23:21:09.854: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:21:10.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:21:12.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:14.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:16.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:18.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:20.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:22.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:24.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751270, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:21:27.559: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:27.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2366" for this suite. STEP: Destroying namespace "webhook-2366-markers" for this suite. 
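For reference, the admission-webhook spec above registers MutatingWebhookConfigurations, lists them, deletes them as a collection, and then checks that a ConfigMap created afterwards is no longer mutated. A minimal kubectl sketch of the same flow, assuming a hypothetical label e2e-list-test-webhooks=demo and ConfigMap name (the real test generates its own per-run names and selector):

# list mutating webhook configurations, cluster-wide and by label
kubectl get mutatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
# delete the whole collection by selector, as the test's delete-collection step does
kubectl delete mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
# a ConfigMap created after the hooks are gone should come back unmutated
kubectl create configmap probe-cm --from-literal=mutated=no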
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.885 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:27.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:21:28.199: INFO: Checking APIGroup: apiregistration.k8s.io Nov 5 23:21:28.200: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Nov 5 23:21:28.200: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.200: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Nov 5 23:21:28.200: INFO: Checking APIGroup: apps Nov 5 23:21:28.201: INFO: PreferredVersion.GroupVersion: apps/v1 Nov 5 23:21:28.201: INFO: Versions found [{apps/v1 v1}] Nov 5 23:21:28.201: INFO: apps/v1 matches apps/v1 Nov 5 23:21:28.201: INFO: Checking APIGroup: events.k8s.io Nov 5 23:21:28.202: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Nov 5 23:21:28.202: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.202: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Nov 5 23:21:28.202: INFO: Checking APIGroup: authentication.k8s.io Nov 5 23:21:28.202: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Nov 5 23:21:28.202: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.202: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Nov 5 23:21:28.202: INFO: Checking APIGroup: authorization.k8s.io Nov 5 23:21:28.203: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Nov 5 23:21:28.203: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.203: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Nov 5 23:21:28.203: INFO: Checking APIGroup: autoscaling Nov 5 23:21:28.203: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Nov 5 23:21:28.203: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Nov 5 23:21:28.203: INFO: autoscaling/v1 matches autoscaling/v1 Nov 5 23:21:28.203: INFO: Checking APIGroup: batch Nov 5 23:21:28.204: INFO: 
PreferredVersion.GroupVersion: batch/v1 Nov 5 23:21:28.204: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Nov 5 23:21:28.204: INFO: batch/v1 matches batch/v1 Nov 5 23:21:28.204: INFO: Checking APIGroup: certificates.k8s.io Nov 5 23:21:28.205: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Nov 5 23:21:28.205: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.205: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Nov 5 23:21:28.205: INFO: Checking APIGroup: networking.k8s.io Nov 5 23:21:28.206: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Nov 5 23:21:28.206: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.206: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Nov 5 23:21:28.206: INFO: Checking APIGroup: extensions Nov 5 23:21:28.207: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Nov 5 23:21:28.208: INFO: Versions found [{extensions/v1beta1 v1beta1}] Nov 5 23:21:28.208: INFO: extensions/v1beta1 matches extensions/v1beta1 Nov 5 23:21:28.208: INFO: Checking APIGroup: policy Nov 5 23:21:28.210: INFO: PreferredVersion.GroupVersion: policy/v1 Nov 5 23:21:28.210: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Nov 5 23:21:28.210: INFO: policy/v1 matches policy/v1 Nov 5 23:21:28.210: INFO: Checking APIGroup: rbac.authorization.k8s.io Nov 5 23:21:28.210: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Nov 5 23:21:28.210: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.210: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Nov 5 23:21:28.210: INFO: Checking APIGroup: storage.k8s.io Nov 5 23:21:28.211: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Nov 5 23:21:28.211: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.211: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Nov 5 23:21:28.211: INFO: Checking APIGroup: admissionregistration.k8s.io Nov 5 23:21:28.212: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Nov 5 23:21:28.212: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.212: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Nov 5 23:21:28.212: INFO: Checking APIGroup: apiextensions.k8s.io Nov 5 23:21:28.213: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Nov 5 23:21:28.213: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.213: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Nov 5 23:21:28.213: INFO: Checking APIGroup: scheduling.k8s.io Nov 5 23:21:28.214: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Nov 5 23:21:28.214: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.214: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Nov 5 23:21:28.214: INFO: Checking APIGroup: coordination.k8s.io Nov 5 23:21:28.215: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Nov 5 23:21:28.215: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.215: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Nov 5 23:21:28.215: INFO: Checking APIGroup: node.k8s.io Nov 5 23:21:28.216: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Nov 5 
23:21:28.216: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.216: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Nov 5 23:21:28.216: INFO: Checking APIGroup: discovery.k8s.io Nov 5 23:21:28.217: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Nov 5 23:21:28.217: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.217: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Nov 5 23:21:28.217: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Nov 5 23:21:28.218: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Nov 5 23:21:28.218: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.218: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Nov 5 23:21:28.218: INFO: Checking APIGroup: intel.com Nov 5 23:21:28.219: INFO: PreferredVersion.GroupVersion: intel.com/v1 Nov 5 23:21:28.219: INFO: Versions found [{intel.com/v1 v1}] Nov 5 23:21:28.219: INFO: intel.com/v1 matches intel.com/v1 Nov 5 23:21:28.219: INFO: Checking APIGroup: k8s.cni.cncf.io Nov 5 23:21:28.219: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Nov 5 23:21:28.219: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Nov 5 23:21:28.219: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Nov 5 23:21:28.219: INFO: Checking APIGroup: monitoring.coreos.com Nov 5 23:21:28.221: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Nov 5 23:21:28.221: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Nov 5 23:21:28.221: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Nov 5 23:21:28.221: INFO: Checking APIGroup: telemetry.intel.com Nov 5 23:21:28.222: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Nov 5 23:21:28.222: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Nov 5 23:21:28.222: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Nov 5 23:21:28.222: INFO: Checking APIGroup: custom.metrics.k8s.io Nov 5 23:21:28.222: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Nov 5 23:21:28.223: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Nov 5 23:21:28.223: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:28.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-2110" for this suite. 
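The check this Discovery spec performs — that each API group's preferredVersion appears among its served versions — can be reproduced by hand against the discovery endpoint; this sketch assumes jq is available:

kubectl api-versions   # every group/version the apiserver advertises
# /apis returns the APIGroupList the test walks; each group carries a
# preferredVersion alongside the full versions list
kubectl get --raw /apis | jq -r '.groups[] | "\(.name) prefers \(.preferredVersion.groupVersion)"'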
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:28.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:28.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7818" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:24.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-90d36ad3-e1e0-4114-bc1f-42be33459c30 STEP: Creating a pod to test consume secrets Nov 5 23:21:24.277: INFO: Waiting up to 5m0s for pod "pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8" in namespace "secrets-4135" to be "Succeeded or Failed" Nov 5 23:21:24.279: INFO: Pod "pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363254ms Nov 5 23:21:26.283: INFO: Pod "pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005833897s Nov 5 23:21:28.286: INFO: Pod "pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009128522s STEP: Saw pod success Nov 5 23:21:28.286: INFO: Pod "pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8" satisfied condition "Succeeded or Failed" Nov 5 23:21:28.289: INFO: Trying to get logs from node node1 pod pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8 container secret-env-test: STEP: delete the pod Nov 5 23:21:28.305: INFO: Waiting for pod pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8 to disappear Nov 5 23:21:28.307: INFO: Pod pod-secrets-77d6e983-7640-430b-8fcb-b62b3c971ea8 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:28.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4135" for this suite. •SS ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:10.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 5 23:21:10.275: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:30.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5586" for this suite. 
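A minimal manifest showing what "invoke init containers on a RestartNever pod" means in practice: init containers run sequentially to completion before the app container starts. Image and names here are illustrative stand-ins, not the ones used by the suite:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:      # each must exit 0, in order, before 'app' starts
  - name: init-1
    image: busybox:1.29
    command: ['true']
  - name: init-2
    image: busybox:1.29
    command: ['true']
  containers:
  - name: app
    image: busybox:1.29
    command: ['sh', '-c', 'echo ran after both init containers']
EOF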
• [SLOW TEST:20.051 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":2,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:20.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:21:20.259: INFO: Creating simple deployment test-new-deployment Nov 5 23:21:20.267: INFO: deployment "test-new-deployment" doesn't have the required revision set Nov 5 23:21:22.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:24.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:26.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:21:28.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751280, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:21:30.304: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-6992 ce16b1f4-8f0f-477a-872d-29f1adcc452f 35553 3 2021-11-05 23:21:20 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-11-05 23:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:21:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048ed968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-11-05 23:21:30 +0000 UTC,LastTransitionTime:2021-11-05 23:21:20 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-11-05 23:21:30 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 5 23:21:30.307: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-6992 3c63e9d6-d9d3-4cdf-b738-14a72df05d9b 35555 3 2021-11-05 23:21:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment ce16b1f4-8f0f-477a-872d-29f1adcc452f 0xc0048edd57 0xc0048edd58}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:21:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce16b1f4-8f0f-477a-872d-29f1adcc452f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048eddc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:21:30.310: INFO: Pod "test-new-deployment-847dcfb7fb-gztdd" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-gztdd test-new-deployment-847dcfb7fb- deployment-6992 d65088c6-19d4-4971-bc36-eef1f6261058 35557 0 2021-11-05 23:21:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 3c63e9d6-d9d3-4cdf-b738-14a72df05d9b 0xc004e6e15f 0xc004e6e170}] [] [{kube-controller-manager Update v1 2021-11-05 23:21:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c63e9d6-d9d3-4cdf-b738-14a72df05d9b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-11-05 23:21:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7t49s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7t49s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-11-05 23:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:21:30.311: INFO: Pod "test-new-deployment-847dcfb7fb-z8qwt" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-z8qwt test-new-deployment-847dcfb7fb- deployment-6992 b924e673-f055-483e-8297-055e6d663e52 35536 0 2021-11-05 23:21:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.207" ], "mac": "3e:23:bb:69:5a:a3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.207" ], "mac": "3e:23:bb:69:5a:a3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 3c63e9d6-d9d3-4cdf-b738-14a72df05d9b 0xc004e6e31f 0xc004e6e330}] [] [{kube-controller-manager Update v1 2021-11-05 23:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c63e9d6-d9d3-4cdf-b738-14a72df05d9b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:21:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:21:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.207\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kz2bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kz2bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.207,StartTime:2021-11-05 23:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:21:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2a930f9e1daf2df4f9a77603baa4ca75aa4217a5f7b33a6acb15501ab196b6d1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:30.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6992" for this suite. 
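The "getting/updating/patching scale subresource" steps above go through the deployment's /scale endpoint; kubectl scale talks to the same endpoint, and it can also be read raw. Namespace and names here are the ones from this run:

# read the scale subresource directly
kubectl get --raw /apis/apps/v1/namespaces/deployment-6992/deployments/test-new-deployment/scale
# kubectl scale writes spec.replicas through the same subresource, which is
# why Spec.Replicas above moved to 4 without touching the pod template
kubectl scale deployment/test-new-deployment --replicas=4 -n deployment-6992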
• [SLOW TEST:10.081 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":80,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:30.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Nov 5 23:21:30.283: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2590 proxy --unix-socket=/tmp/kubectl-proxy-unix830009727/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:30.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2590" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":3,"skipped":53,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:30.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:30.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-58" for this suite. 
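On the kubectl proxy spec just above: --unix-socket serves the proxied API over a filesystem socket instead of TCP, and any unix-socket-aware HTTP client can then retrieve /api/ through it. A sketch, assuming a curl build with --unix-socket support:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# the host in the URL is ignored; the socket carries the connection
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1   # stop the background proxy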
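And for the Pods Set QOS Class spec: when every container's requests equal its limits for both cpu and memory, the pod is classified Guaranteed. A hypothetical manifest and check (image and values are stand-ins):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.4.1
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:          # identical to requests => Guaranteed
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed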
•SS ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":4,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:20.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:21:20.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f" in namespace "downward-api-3070" to be "Succeeded or Failed" Nov 5 23:21:20.335: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286046ms Nov 5 23:21:22.339: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005597195s Nov 5 23:21:24.342: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009045863s Nov 5 23:21:26.345: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012269411s Nov 5 23:21:28.351: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017684066s Nov 5 23:21:30.353: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020201849s STEP: Saw pod success Nov 5 23:21:30.353: INFO: Pod "downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f" satisfied condition "Succeeded or Failed" Nov 5 23:21:30.358: INFO: Trying to get logs from node node2 pod downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f container client-container: STEP: delete the pod Nov 5 23:21:30.449: INFO: Waiting for pod downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f to disappear Nov 5 23:21:30.450: INFO: Pod downwardapi-volume-6c718918-448b-47d4-b3d4-368a35517f7f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:30.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3070" for this suite. 
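The downward API spec above relies on a documented fallback: when a container sets no cpu limit, a downwardAPI volume item projecting limits.cpu reports the node's allocatable cpu instead. A sketch of such a pod (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/podinfo/cpu_limit']
    # deliberately no resources.limits.cpu set: the projected value then
    # falls back to the node's allocatable cpu
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downward-cpu-demo   # prints the allocatable cpu (divisor defaults to "1", i.e. whole cores)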
• [SLOW TEST:10.157 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":74,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:15.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Nov 5 23:21:15.981: INFO: namespace kubectl-8991 Nov 5 23:21:15.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8991 create -f -' Nov 5 23:21:16.334: INFO: stderr: "" Nov 5 23:21:16.334: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 5 23:21:17.338: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:17.338: INFO: Found 0 / 1 Nov 5 23:21:18.337: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:18.337: INFO: Found 0 / 1 Nov 5 23:21:19.338: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:19.338: INFO: Found 0 / 1 Nov 5 23:21:20.337: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:20.337: INFO: Found 0 / 1 Nov 5 23:21:21.338: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:21.338: INFO: Found 0 / 1 Nov 5 23:21:22.340: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:22.340: INFO: Found 0 / 1 Nov 5 23:21:23.337: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:23.337: INFO: Found 0 / 1 Nov 5 23:21:24.338: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:24.338: INFO: Found 0 / 1 Nov 5 23:21:25.337: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:25.337: INFO: Found 0 / 1 Nov 5 23:21:26.337: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:26.337: INFO: Found 0 / 1 Nov 5 23:21:27.337: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:27.337: INFO: Found 1 / 1 Nov 5 23:21:27.337: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 5 23:21:27.340: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:21:27.340: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 5 23:21:27.340: INFO: wait on agnhost-primary startup in kubectl-8991 Nov 5 23:21:27.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8991 logs agnhost-primary-7gs6g agnhost-primary' Nov 5 23:21:27.512: INFO: stderr: "" Nov 5 23:21:27.512: INFO: stdout: "Paused\n" STEP: exposing RC Nov 5 23:21:27.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8991 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Nov 5 23:21:27.725: INFO: stderr: "" Nov 5 23:21:27.725: INFO: stdout: "service/rm2 exposed\n" Nov 5 23:21:27.727: INFO: Service rm2 in namespace kubectl-8991 found. STEP: exposing service Nov 5 23:21:29.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8991 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Nov 5 23:21:29.899: INFO: stderr: "" Nov 5 23:21:29.899: INFO: stdout: "service/rm3 exposed\n" Nov 5 23:21:29.901: INFO: Service rm3 in namespace kubectl-8991 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8991" for this suite. • [SLOW TEST:15.954 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:20.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:21:20.143: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Nov 5 23:21:28.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 create -f -' Nov 5 23:21:29.185: INFO: stderr: "" Nov 5 23:21:29.185: INFO: stdout: "e2e-test-crd-publish-openapi-9493-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Nov 5 23:21:29.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 delete e2e-test-crd-publish-openapi-9493-crds test-foo' Nov 5 23:21:29.349: INFO: stderr: "" Nov 5 23:21:29.349: INFO: stdout: "e2e-test-crd-publish-openapi-9493-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Nov 5 23:21:29.349: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 apply -f -' Nov 5 23:21:29.675: INFO: stderr: "" Nov 5 23:21:29.675: INFO: stdout: "e2e-test-crd-publish-openapi-9493-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Nov 5 23:21:29.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 delete e2e-test-crd-publish-openapi-9493-crds test-foo' Nov 5 23:21:29.848: INFO: stderr: "" Nov 5 23:21:29.848: INFO: stdout: "e2e-test-crd-publish-openapi-9493-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Nov 5 23:21:29.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 create -f -' Nov 5 23:21:30.138: INFO: rc: 1 Nov 5 23:21:30.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 apply -f -' Nov 5 23:21:30.453: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Nov 5 23:21:30.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 create -f -' Nov 5 23:21:30.728: INFO: rc: 1 Nov 5 23:21:30.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 --namespace=crd-publish-openapi-933 apply -f -' Nov 5 23:21:30.989: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Nov 5 23:21:30.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 explain e2e-test-crd-publish-openapi-9493-crds' Nov 5 23:21:31.325: INFO: stderr: "" Nov 5 23:21:31.325: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9493-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Nov 5 23:21:31.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 explain e2e-test-crd-publish-openapi-9493-crds.metadata' Nov 5 23:21:31.663: INFO: stderr: "" Nov 5 23:21:31.663: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9493-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Nov 5 23:21:31.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 explain e2e-test-crd-publish-openapi-9493-crds.spec' Nov 5 23:21:31.962: INFO: stderr: "" Nov 5 23:21:31.962: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9493-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Nov 5 23:21:31.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 explain e2e-test-crd-publish-openapi-9493-crds.spec.bars' Nov 5 23:21:32.287: INFO: stderr: "" Nov 5 23:21:32.287: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9493-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Nov 5 23:21:32.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 explain e2e-test-crd-publish-openapi-9493-crds.spec.bars2' Nov 5 23:21:32.632: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:36.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-933" for this suite. 
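[Annotation] The `kubectl explain` calls above read the OpenAPI schema the apiserver publishes for the CRD; the resource name (e2e-test-crd-publish-openapi-9493-crds) is generated per run. Stripped of the test-harness flags, the same checks against this cluster would be, as a sketch:

  $ kubectl explain e2e-test-crd-publish-openapi-9493-crds
  $ kubectl explain e2e-test-crd-publish-openapi-9493-crds.spec.bars
  $ kubectl explain e2e-test-crd-publish-openapi-9493-crds.spec.bars2   # expected to fail (rc 1): no such property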
• [SLOW TEST:16.550 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:28.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 5 23:21:28.883: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 5 23:21:30.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751288, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751288, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751288, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751288, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:21:33.901: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:21:33.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:41.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2915" for this suite. 
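[Annotation] The conversion-webhook test above wires a serving cert, an RBAC binding, a webhook deployment, and a service before creating a v1 CR and reading it back as v2. A minimal sketch for verifying that a CRD is configured for webhook conversion (the CRD name foos.example.com is hypothetical; the test generates its own):

  $ kubectl get crd foos.example.com -o jsonpath='{.spec.conversion.strategy}'
  $ kubectl get crd foos.example.com -o jsonpath='{.spec.conversion.webhook.clientConfig.service.name}'

The first command should print "Webhook" when a conversion webhook is registered.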
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.113 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":4,"skipped":101,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:41.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:41.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1694" for this suite. 
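[Annotation] The discovery-document walk above can be reproduced with raw API requests; jq is assumed to be installed:

  $ kubectl get --raw /apis | jq -r '.groups[].name' | grep apiextensions.k8s.io
  $ kubectl get --raw /apis/apiextensions.k8s.io | jq -r '.versions[].groupVersion'
  $ kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq -r '.resources[].name' | grep customresourcedefinitions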
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":5,"skipped":113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:36.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:21:37.467: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:21:39.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751297, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751297, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:21:42.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:43.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-198" for this suite. STEP: Destroying namespace "webhook-198-markers" for this suite. 
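[Annotation] The patch/update sequence above toggles the webhook's rules so that CREATE is first excluded and then re-included, checking enforcement after each change. A sketch of an equivalent manual patch, assuming a configuration named e2e-test-webhook (hypothetical; the test manages its own object):

  $ kubectl patch validatingwebhookconfiguration e2e-test-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'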
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.884 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:30.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Nov 5 23:21:30.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 create -f -' Nov 5 23:21:30.755: INFO: stderr: "" Nov 5 23:21:30.755: INFO: stdout: "pod/pause created\n" Nov 5 23:21:30.755: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Nov 5 23:21:30.755: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1491" to be "running and ready" Nov 5 23:21:30.757: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080723ms Nov 5 23:21:32.760: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005181277s Nov 5 23:21:34.763: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007869098s Nov 5 23:21:36.766: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010757284s Nov 5 23:21:38.769: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014062173s Nov 5 23:21:40.772: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017279033s Nov 5 23:21:42.776: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.021249183s Nov 5 23:21:42.776: INFO: Pod "pause" satisfied condition "running and ready" Nov 5 23:21:42.776: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Nov 5 23:21:42.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 label pods pause testing-label=testing-label-value' Nov 5 23:21:42.942: INFO: stderr: "" Nov 5 23:21:42.942: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Nov 5 23:21:42.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 get pod pause -L testing-label' Nov 5 23:21:43.097: INFO: stderr: "" Nov 5 23:21:43.097: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 13s testing-label-value\n" STEP: removing the label testing-label of a pod Nov 5 23:21:43.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 label pods pause testing-label-' Nov 5 23:21:43.271: INFO: stderr: "" Nov 5 23:21:43.271: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Nov 5 23:21:43.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 get pod pause -L testing-label' Nov 5 23:21:43.423: INFO: stderr: "" Nov 5 23:21:43.424: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 13s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Nov 5 23:21:43.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 delete --grace-period=0 --force -f -' Nov 5 23:21:43.546: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:21:43.546: INFO: stdout: "pod \"pause\" force deleted\n" Nov 5 23:21:43.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 get rc,svc -l name=pause --no-headers' Nov 5 23:21:43.737: INFO: stderr: "No resources found in kubectl-1491 namespace.\n" Nov 5 23:21:43.737: INFO: stdout: "" Nov 5 23:21:43.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1491 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 5 23:21:43.887: INFO: stderr: "" Nov 5 23:21:43.887: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:43.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1491" for this suite. 
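[Annotation] The label test boils down to three kubectl invocations, all visible verbatim in the log above (kubeconfig and namespace flags omitted here):

  $ kubectl label pods pause testing-label=testing-label-value
  $ kubectl get pod pause -L testing-label
  $ kubectl label pods pause testing-label-    # trailing '-' removes the label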
• [SLOW TEST:13.491 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":4,"skipped":76,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:43.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Nov 5 23:21:43.927: INFO: Found Service test-service-hrl4n in namespace services-2392 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Nov 5 23:21:43.927: INFO: Service test-service-hrl4n created STEP: Getting /status Nov 5 23:21:43.930: INFO: Service test-service-hrl4n has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Nov 5 23:21:43.935: INFO: observed Service test-service-hrl4n in namespace services-2392 with annotations: map[] & LoadBalancer: {[]} Nov 5 23:21:43.935: INFO: Found Service test-service-hrl4n in namespace services-2392 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Nov 5 23:21:43.935: INFO: Service test-service-hrl4n has service status patched STEP: updating the ServiceStatus Nov 5 23:21:43.940: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Nov 5 23:21:43.942: INFO: Observed Service test-service-hrl4n in namespace services-2392 with annotations: map[] & Conditions: {[]} Nov 5 23:21:43.942: INFO: Observed event: &Service{ObjectMeta:{test-service-hrl4n services-2392 515e2da5-eefb-493f-a2b0-12aebb748856 36104 0 2021-11-05 23:21:43 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-11-05 23:21:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.28.148,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.28.148],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Nov 5 23:21:43.942: INFO: Found Service test-service-hrl4n in namespace services-2392 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Nov 5 23:21:43.942: INFO: Service test-service-hrl4n has service status updated STEP: patching the service STEP: watching for the Service to be patched Nov 5 23:21:43.952: INFO: observed Service test-service-hrl4n in namespace services-2392 with labels: map[test-service-static:true] Nov 5 23:21:43.952: INFO: observed Service test-service-hrl4n in namespace services-2392 with labels: map[test-service-static:true] Nov 5 23:21:43.952: INFO: observed Service test-service-hrl4n in namespace services-2392 with labels: map[test-service-static:true] Nov 5 23:21:43.952: INFO: Found Service test-service-hrl4n in namespace services-2392 with labels: map[test-service:patched test-service-static:true] Nov 5 23:21:43.952: INFO: Service test-service-hrl4n patched STEP: deleting the service STEP: watching for the Service to be deleted Nov 5 23:21:43.960: INFO: Observed event: ADDED Nov 5 23:21:43.961: INFO: Observed event: MODIFIED Nov 5 23:21:43.961: INFO: Observed event: MODIFIED Nov 5 23:21:43.961: INFO: Observed event: MODIFIED Nov 5 23:21:43.961: INFO: Found Service test-service-hrl4n in namespace services-2392 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Nov 5 23:21:43.961: INFO: Service test-service-hrl4n deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:43.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2392" for this suite. 
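[Annotation] The status patch above injects a placeholder LoadBalancer ingress IP (203.0.113.1, a TEST-NET-3 documentation address) through the service's /status subresource. The test does this via the client library; kubectl itself only gained a --subresource flag in v1.24+, so on a newer client a rough equivalent would be:

  $ kubectl patch svc test-service-hrl4n --subresource=status --type=merge \
      -p='{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}'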
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":5,"skipped":77,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:26.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Nov 5 23:21:26.702: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:21:34.743: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:54.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3164" for this suite. • [SLOW TEST:27.362 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":4,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:54.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 5 23:21:54.114: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Nov 5 23:21:54.117: INFO: starting watch STEP: patching STEP: updating Nov 5 23:21:54.126: INFO: waiting for watch events with expected annotations Nov 5 23:21:54.126: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] 
EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:54.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9229" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":5,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:28.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Nov 5 23:21:28.356: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:30.359: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:32.362: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:34.359: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:36.361: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled Nov 5 23:21:36.374: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:38.377: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:40.378: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:42.378: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:44.377: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides Nov 5 23:21:44.389: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:46.393: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:48.392: INFO: The status of Pod pod3 is Running (Ready = true) Nov 5 23:21:48.403: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:50.408: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Nov 5 23:21:50.410: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.208 http://127.0.0.1:54323/hostname] Namespace:hostport-6950 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Nov 5 23:21:50.410: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 Nov 5 23:21:50.528: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.208:54323/hostname] Namespace:hostport-6950 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:21:50.528: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 UDP Nov 5 23:21:50.644: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.208 54323] Namespace:hostport-6950 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:21:50.644: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:55.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-6950" for this suite. • [SLOW TEST:27.426 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:30.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6278 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 5 23:21:30.480: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 5 23:21:30.508: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:32.516: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:21:34.512: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:36.512: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:38.511: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:40.513: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:42.514: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:44.512: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:46.511: INFO: The 
status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:48.511: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:50.512: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:21:52.512: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 5 23:21:52.516: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 5 23:21:56.556: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Nov 5 23:21:56.556: INFO: Going to poll 10.244.3.167 on port 8080 at least 0 times, with a maximum of 34 tries before failing Nov 5 23:21:56.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.167:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6278 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:21:56.558: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:21:56.686: INFO: Found all 1 expected endpoints: [netserver-0] Nov 5 23:21:56.686: INFO: Going to poll 10.244.4.211 on port 8080 at least 0 times, with a maximum of 34 tries before failing Nov 5 23:21:56.689: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.211:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6278 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:21:56.689: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:21:56.796: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:56.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6278" for this suite. 
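[Annotation] Each connectivity probe above is an exec into the host-network test pod; stripped to its essentials (namespace, pod, and target IP taken from this run):

  $ kubectl -n pod-network-test-6278 exec host-test-container-pod -c agnhost-container -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.167:8080/hostName"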
• [SLOW TEST:26.346 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":75,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:43.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 5 23:21:44.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7893 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Nov 5 23:21:44.148: INFO: stderr: "" Nov 5 23:21:44.148: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Nov 5 23:21:44.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7893 delete pods e2e-test-httpd-pod' Nov 5 23:21:58.688: INFO: stderr: "" Nov 5 23:21:58.688: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:58.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7893" for this suite. 
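[Annotation] The run-pod test is two commands, both shown verbatim in the log above:

  $ kubectl run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
  $ kubectl delete pods e2e-test-httpd-pod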
• [SLOW TEST:14.717 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":6,"skipped":82,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:58.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Nov 5 23:21:58.765: INFO: created test-podtemplate-1 Nov 5 23:21:58.768: INFO: created test-podtemplate-2 Nov 5 23:21:58.771: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Nov 5 23:21:58.773: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Nov 5 23:21:58.790: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:21:58.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-6077" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":7,"skipped":102,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:58.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-2db9accd-258a-4258-8208-924b05b36786 STEP: Creating a pod to test consume secrets Nov 5 23:21:58.862: INFO: Waiting up to 5m0s for pod "pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8" in namespace "secrets-5807" to be "Succeeded or Failed" Nov 5 23:21:58.865: INFO: Pod "pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598422ms Nov 5 23:22:00.870: INFO: Pod "pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008544151s Nov 5 23:22:02.875: INFO: Pod "pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012696021s STEP: Saw pod success Nov 5 23:22:02.875: INFO: Pod "pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8" satisfied condition "Succeeded or Failed" Nov 5 23:22:02.877: INFO: Trying to get logs from node node1 pod pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8 container secret-volume-test: STEP: delete the pod Nov 5 23:22:02.892: INFO: Waiting for pod pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8 to disappear Nov 5 23:22:02.894: INFO: Pod pod-secrets-1af645fc-b6c9-43b1-a3cf-89c9282bcbe8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:02.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5807" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:02.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-639ff6c5-ff06-42ff-bb8d-9829a2ac77a7 STEP: Creating a pod to test consume configMaps Nov 5 23:22:03.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14" in namespace "configmap-4218" to be "Succeeded or Failed" Nov 5 23:22:03.017: INFO: Pod "pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.714299ms Nov 5 23:22:05.021: INFO: Pod "pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006702873s Nov 5 23:22:07.025: INFO: Pod "pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010917605s STEP: Saw pod success Nov 5 23:22:07.025: INFO: Pod "pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14" satisfied condition "Succeeded or Failed" Nov 5 23:22:07.028: INFO: Trying to get logs from node node1 pod pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14 container agnhost-container: STEP: delete the pod Nov 5 23:22:07.042: INFO: Waiting for pod pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14 to disappear Nov 5 23:22:07.044: INFO: Pod pod-configmaps-03217833-ad8c-4550-b812-3ce778f86e14 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:07.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4218" for this suite. 
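[Annotation] Both the Secrets and ConfigMap tests above mount a key into a pod through the volume "items" mapping (the key is renamed to a path, with an explicit file mode in the secret case). A minimal self-contained sketch of the same pattern, with hypothetical names and image, mirroring the `kubectl create -f -` style used elsewhere in this run:

  $ kubectl create configmap cm-mappings-demo --from-literal=data-1=value-1
  $ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-mappings-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-mappings-demo
      items:
      - key: data-1
        path: path/to/data-1
EOF
  $ kubectl logs cm-mappings-pod    # once the pod completes; expect: value-1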
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:07.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Nov 5 23:22:07.171: INFO: observed Pod pod-test in namespace pods-7470 in phase Pending with labels: map[test-pod-static:true] & conditions [] Nov 5 23:22:07.172: INFO: observed Pod pod-test in namespace pods-7470 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC }] Nov 5 23:22:07.180: INFO: observed Pod pod-test in namespace pods-7470 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC }] Nov 5 23:22:10.372: INFO: observed Pod pod-test in namespace pods-7470 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC }] Nov 5 23:22:11.275: INFO: Found Pod pod-test in namespace pods-7470 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:22:07 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Nov 5 23:22:11.287: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Nov 5 23:22:11.306: INFO: observed event type ADDED Nov 5 23:22:11.306: INFO: observed event 
type MODIFIED Nov 5 23:22:11.306: INFO: observed event type MODIFIED Nov 5 23:22:11.306: INFO: observed event type MODIFIED Nov 5 23:22:11.306: INFO: observed event type MODIFIED Nov 5 23:22:11.306: INFO: observed event type MODIFIED Nov 5 23:22:11.306: INFO: observed event type MODIFIED Nov 5 23:22:11.306: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:11.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7470" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":10,"skipped":187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:43.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:17.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7911" for this suite. 
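The Container Runtime blackbox test above starts containers that exit under different restart policies and asserts on the reported 'RestartCount', 'Phase', 'Ready' condition, and 'State'. Below is a sketch of that observation loop, assuming a busybox image, an always-failing command, and an OnFailure restart policy (all stand-ins, not the test's actual 'terminate-cmd-*' containers).

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// watchTerminatingContainer creates a pod whose container exits non-zero and
// polls the pod status until the kubelet has restarted it at least once,
// printing the fields the conformance test asserts on.
func watchTerminatingContainer(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyOnFailure,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // fails; kubelet restarts it
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, "terminate-demo", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if len(p.Status.ContainerStatuses) == 0 {
			return false, nil
		}
		st := p.Status.ContainerStatuses[0]
		fmt.Printf("phase=%s ready=%t restarts=%d state=%+v\n",
			p.Status.Phase, st.Ready, st.RestartCount, st.State)
		return st.RestartCount > 0, nil
	})
}
```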
• [SLOW TEST:34.250 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":111,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:30.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4156, will wait for the garbage collector to delete the pods Nov 5 23:21:42.468: INFO: Deleting Job.batch foo took: 4.015787ms Nov 5 23:21:42.568: INFO: Terminating Job.batch foo pods took: 100.463076ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:18.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4156" for this suite. 
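The Job test above deletes the Job and then waits for the garbage collector to remove its pods; server-side cascading deletion is controlled by the propagation policy on the delete call. A sketch under the assumption of Background propagation (the framework's exact choice is not visible in the log):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndPods deletes a Job and asks the server-side garbage collector
// to remove its dependent pods, matching the log's "will wait for the
// garbage collector to delete the pods" step.
func deleteJobAndPods(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground // dependents collected asynchronously
	return cs.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```

With Background propagation the Job object disappears immediately and the pods follow, which is why the test separately polls for "Ensuring job was deleted".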
• [SLOW TEST:48.502 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":5,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:55.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:21:55.851: INFO: Pod name rollover-pod: Found 0 pods out of 1 Nov 5 23:22:00.855: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 5 23:22:00.855: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Nov 5 23:22:02.859: INFO: Creating deployment "test-rollover-deployment" Nov 5 23:22:02.866: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Nov 5 23:22:04.872: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Nov 5 23:22:04.878: INFO: Ensure that both replica sets have 1 created replica Nov 5 23:22:04.884: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Nov 5 23:22:04.893: INFO: Updating deployment test-rollover-deployment Nov 5 23:22:04.893: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Nov 5 23:22:06.898: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Nov 5 23:22:06.903: INFO: Make sure deployment "test-rollover-deployment" is complete Nov 5 23:22:06.907: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:06.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751324, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:08.915: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:08.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751324, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:10.917: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:10.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751329, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:12.915: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:12.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751329, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:14.914: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:14.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751329, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:16.916: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:16.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751329, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:18.914: INFO: all replica sets need to contain the pod-template-hash label Nov 5 23:22:18.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751329, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751322, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:20.915: INFO: Nov 5 23:22:20.915: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:22:20.921: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9777 d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2 36860 2 2021-11-05 23:22:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-11-05 23:22:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:22:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00419be78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-11-05 23:22:02 +0000 UTC,LastTransitionTime:2021-11-05 23:22:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-11-05 23:22:19 +0000 UTC,LastTransitionTime:2021-11-05 23:22:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 5 23:22:20.925: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9777 fa2fbb47-e792-4f01-8f1d-3e7eadeeffb2 36849 2 2021-11-05 23:22:04 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-rollover-deployment d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2 0xc002710420 0xc002710421}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:22:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002710498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:22:20.925: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Nov 5 23:22:20.925: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9777 22b2db21-35d6-48f3-ade7-c97a2109f798 36859 2 2021-11-05 23:21:55 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2 0xc002710217 0xc002710218}] [] [{e2e.test Update apps/v1 2021-11-05 23:21:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:22:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0027102b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:22:20.925: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9777 c84b7828-deae-4c48-9772-da8bacec6e25 36579 2 2021-11-05 23:22:02 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2 0xc002710327 0xc002710328}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:22:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c9b4fa-681c-4ec6-a2f6-a19d3b3a96e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0xc0027103b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:22:20.928: INFO: Pod "test-rollover-deployment-98c5f4599-jq7rj" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-jq7rj test-rollover-deployment-98c5f4599- deployment-9777 a980c09c-7248-427e-9954-16a79a48119c 36663 0 2021-11-05 23:22:04 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.221" ], "mac": "ba:07:a6:c3:eb:32", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.221" ], "mac": "ba:07:a6:c3:eb:32", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 fa2fbb47-e792-4f01-8f1d-3e7eadeeffb2 0xc00271098f 0xc0027109a0}] [] [{kube-controller-manager Update v1 2021-11-05 23:22:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa2fbb47-e792-4f01-8f1d-3e7eadeeffb2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:22:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:22:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.221\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vb6vx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vb6vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.221,StartTime:2021-11-05 23:22:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:22:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://5ce49fd1f58e5d05523c4cdbe356caa288c6ce679bc617ed1375412864602a16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:20.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9777" for this suite. 
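The rollover test's deployment, visible in the dump above, combines MinReadySeconds:10 with MaxUnavailable:0 / MaxSurge:1, so the image update brings the new ReplicaSet to full strength before the old one is scaled to zero. A condensed client-go sketch of that shape; the `rollover-demo` name is illustrative and the images are reused from the log for flavor only.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// rolloverDemo creates a deployment shaped like test-rollover-deployment,
// then triggers a rollover by swapping the pod template image.
func rolloverDemo(ctx context.Context, cs kubernetes.Interface, ns string) error {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0) // never drop below the desired count
	maxSurge := intstr.FromInt(1)       // allow one extra pod during the roll
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rollover-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10, // a pod must stay Ready 10s to count as available
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "rollover-pod"},
			},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "rollover-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Updating the image starts the rollover: the new ReplicaSet scales up and,
	// once its pod has been available for MinReadySeconds, the old one goes to zero.
	created.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.32"
	_, err = cs.AppsV1().Deployments(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}
```

The repeated "all replica sets need to contain the pod-template-hash label" polling in the log is exactly this wait: the deployment is not complete until the old ReplicaSet reports zero replicas.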
• [SLOW TEST:25.120 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":5,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:17.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:21.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3628" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":5,"skipped":112,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:54.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:22.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2723" for this suite. 
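The ResourceQuota test above counts ConfigMaps against a hard quota and watches status.used rise on creation and fall on deletion. A sketch of the same lifecycle, with an assumed quota name and limit:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// configMapQuota caps ConfigMap count with a quota, creates one ConfigMap,
// and reads back the usage the quota controller records in status.used.
func configMapQuota(ctx context.Context, cs kubernetes.Interface, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourceConfigMaps: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		return err
	}
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "quota-cm"}}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	// The controller updates status asynchronously; a real caller polls until
	// Used reflects the new object, which is what the test's "Ensuring" steps do.
	got, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "demo-quota", metav1.GetOptions{})
	if err != nil {
		return err
	}
	used := got.Status.Used[corev1.ResourceConfigMaps]
	fmt.Println("used configmaps:", used.String())
	return nil
}
```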
• [SLOW TEST:28.067 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":6,"skipped":119,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:18.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:22:18.980: INFO: The status of Pod busybox-readonly-fs58c5c080-529c-475f-a053-0996693df382 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:20.983: INFO: The status of Pod busybox-readonly-fs58c5c080-529c-475f-a053-0996693df382 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:22.984: INFO: The status of Pod busybox-readonly-fs58c5c080-529c-475f-a053-0996693df382 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7172" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:11.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Nov 5 23:22:11.388: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
Nov 5 23:22:11.837: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Nov 5 23:22:13.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:15.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:17.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:19.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:21.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:23.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751331, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:22:26.787: INFO: Waited 912.494915ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Nov 5 23:22:27.241: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:28.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1444" for this suite. 
• [SLOW TEST:16.773 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":211,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:22.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Nov 5 23:22:26.849: INFO: Successfully updated pod "adopt-release-9h8h6" STEP: Checking that the Job readopts the Pod Nov 5 23:22:26.849: INFO: Waiting up to 15m0s for pod "adopt-release-9h8h6" in namespace "job-9744" to be "adopted" Nov 5 23:22:26.852: INFO: Pod "adopt-release-9h8h6": Phase="Running", Reason="", readiness=true. Elapsed: 2.57215ms Nov 5 23:22:28.856: INFO: Pod "adopt-release-9h8h6": Phase="Running", Reason="", readiness=true. Elapsed: 2.007339877s Nov 5 23:22:28.856: INFO: Pod "adopt-release-9h8h6" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Nov 5 23:22:29.366: INFO: Successfully updated pod "adopt-release-9h8h6" STEP: Checking that the Job releases the Pod Nov 5 23:22:29.366: INFO: Waiting up to 15m0s for pod "adopt-release-9h8h6" in namespace "job-9744" to be "released" Nov 5 23:22:29.368: INFO: Pod "adopt-release-9h8h6": Phase="Running", Reason="", readiness=true. Elapsed: 1.979368ms Nov 5 23:22:31.372: INFO: Pod "adopt-release-9h8h6": Phase="Running", Reason="", readiness=true. Elapsed: 2.005916957s Nov 5 23:22:31.372: INFO: Pod "adopt-release-9h8h6" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:31.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9744" for this suite. 
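The adopt/release test above orphans a running pod and checks that the Job controller re-adopts anything matching its selector; removing the pod's labels then forces a release. Orphaning comes down to clearing ownerReferences, as in this sketch (pod name, intervals, and timeout are illustrative):

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// orphanAndAwaitAdoption clears a pod's ownerReferences (orphaning it) and
// polls until a controller re-adopts it, i.e. an owner reference reappears.
func orphanAndAwaitAdoption(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.OwnerReferences = nil // drop the Job's controller reference
	if _, err := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(p.OwnerReferences) > 0, nil // re-adopted once a reference is back
	})
}
```

The release half of the test is the mirror image: with the matching labels removed, the controller drops its owner reference, so the same poll inverted (waiting for zero references) would confirm it.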
• [SLOW TEST:9.075 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":7,"skipped":135,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:28.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:22:28.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465" in namespace "downward-api-1241" to be "Succeeded or Failed" Nov 5 23:22:28.183: INFO: Pod "downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293817ms Nov 5 23:22:30.186: INFO: Pod "downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005577309s Nov 5 23:22:32.191: INFO: Pod "downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010115558s STEP: Saw pod success Nov 5 23:22:32.191: INFO: Pod "downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465" satisfied condition "Succeeded or Failed" Nov 5 23:22:32.193: INFO: Trying to get logs from node node2 pod downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465 container client-container: STEP: delete the pod Nov 5 23:22:32.205: INFO: Waiting for pod downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465 to disappear Nov 5 23:22:32.206: INFO: Pod downwardapi-volume-2a4ae16c-4701-43f8-9e6b-165d7f33a465 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:32.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1241" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:22.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:33.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3219" for this suite. • [SLOW TEST:11.059 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":6,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:23.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:22:23.057: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-115 I1105 23:22:23.078732 26 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-115, replica count: 1 I1105 23:22:24.129161 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:22:25.129894 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:22:26.130151 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:22:27.130288 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:22:27.236: INFO: Created: latency-svc-5jq6j Nov 5 23:22:27.240: INFO: Got endpoints: latency-svc-5jq6j [9.819504ms] Nov 5 23:22:27.248: INFO: Created: latency-svc-fsz97 Nov 5 23:22:27.251: INFO: Got endpoints: latency-svc-fsz97 [10.305114ms] Nov 5 23:22:27.252: INFO: Created: latency-svc-qxxfg Nov 5 23:22:27.254: INFO: Got endpoints: latency-svc-qxxfg [13.516836ms] Nov 5 23:22:27.255: INFO: Created: latency-svc-8v8g8 Nov 5 23:22:27.257: INFO: Got endpoints: latency-svc-8v8g8 [16.666713ms] Nov 5 23:22:27.257: INFO: Created: latency-svc-h8ss6 Nov 5 23:22:27.260: INFO: Got endpoints: latency-svc-h8ss6 [19.068065ms] Nov 5 23:22:27.260: 
INFO: Created: latency-svc-z7g97 Nov 5 23:22:27.262: INFO: Got endpoints: latency-svc-z7g97 [21.328863ms] Nov 5 23:22:27.263: INFO: Created: latency-svc-wf62l Nov 5 23:22:27.266: INFO: Got endpoints: latency-svc-wf62l [25.022823ms] Nov 5 23:22:27.266: INFO: Created: latency-svc-ng4m4 Nov 5 23:22:27.269: INFO: Got endpoints: latency-svc-ng4m4 [27.876801ms] Nov 5 23:22:27.269: INFO: Created: latency-svc-sr6ww Nov 5 23:22:27.271: INFO: Got endpoints: latency-svc-sr6ww [30.62041ms] Nov 5 23:22:27.272: INFO: Created: latency-svc-475pn Nov 5 23:22:27.274: INFO: Got endpoints: latency-svc-475pn [33.863537ms] Nov 5 23:22:27.275: INFO: Created: latency-svc-vh8xq Nov 5 23:22:27.278: INFO: Created: latency-svc-jllrr Nov 5 23:22:27.278: INFO: Got endpoints: latency-svc-vh8xq [37.950972ms] Nov 5 23:22:27.280: INFO: Got endpoints: latency-svc-jllrr [38.933666ms] Nov 5 23:22:27.281: INFO: Created: latency-svc-4xjts Nov 5 23:22:27.284: INFO: Got endpoints: latency-svc-4xjts [42.890633ms] Nov 5 23:22:27.284: INFO: Created: latency-svc-tkfbb Nov 5 23:22:27.286: INFO: Got endpoints: latency-svc-tkfbb [45.535458ms] Nov 5 23:22:27.287: INFO: Created: latency-svc-8g2w2 Nov 5 23:22:27.289: INFO: Got endpoints: latency-svc-8g2w2 [48.669192ms] Nov 5 23:22:27.289: INFO: Created: latency-svc-tb5jg Nov 5 23:22:27.291: INFO: Got endpoints: latency-svc-tb5jg [50.538657ms] Nov 5 23:22:27.293: INFO: Created: latency-svc-dkk67 Nov 5 23:22:27.295: INFO: Got endpoints: latency-svc-dkk67 [44.345981ms] Nov 5 23:22:27.296: INFO: Created: latency-svc-6zgfb Nov 5 23:22:27.299: INFO: Got endpoints: latency-svc-6zgfb [45.01265ms] Nov 5 23:22:27.301: INFO: Created: latency-svc-g8cv5 Nov 5 23:22:27.303: INFO: Got endpoints: latency-svc-g8cv5 [45.285064ms] Nov 5 23:22:27.303: INFO: Created: latency-svc-9nbdz Nov 5 23:22:27.305: INFO: Got endpoints: latency-svc-9nbdz [45.664447ms] Nov 5 23:22:27.306: INFO: Created: latency-svc-2h4m2 Nov 5 23:22:27.308: INFO: Got endpoints: latency-svc-2h4m2 [46.153863ms] Nov 5 23:22:27.309: INFO: Created: latency-svc-hhh8t Nov 5 23:22:27.312: INFO: Got endpoints: latency-svc-hhh8t [45.801029ms] Nov 5 23:22:27.312: INFO: Created: latency-svc-vcnlh Nov 5 23:22:27.315: INFO: Got endpoints: latency-svc-vcnlh [45.974823ms] Nov 5 23:22:27.316: INFO: Created: latency-svc-46qvr Nov 5 23:22:27.319: INFO: Got endpoints: latency-svc-46qvr [47.228252ms] Nov 5 23:22:27.319: INFO: Created: latency-svc-pm749 Nov 5 23:22:27.321: INFO: Got endpoints: latency-svc-pm749 [47.097915ms] Nov 5 23:22:27.322: INFO: Created: latency-svc-kjzg8 Nov 5 23:22:27.324: INFO: Got endpoints: latency-svc-kjzg8 [45.835888ms] Nov 5 23:22:27.325: INFO: Created: latency-svc-6zthw Nov 5 23:22:27.328: INFO: Got endpoints: latency-svc-6zthw [48.142774ms] Nov 5 23:22:27.328: INFO: Created: latency-svc-wsz5l Nov 5 23:22:27.330: INFO: Got endpoints: latency-svc-wsz5l [46.323495ms] Nov 5 23:22:27.331: INFO: Created: latency-svc-4v7gr Nov 5 23:22:27.333: INFO: Got endpoints: latency-svc-4v7gr [47.371614ms] Nov 5 23:22:27.334: INFO: Created: latency-svc-g5q2n Nov 5 23:22:27.337: INFO: Got endpoints: latency-svc-g5q2n [47.358747ms] Nov 5 23:22:27.338: INFO: Created: latency-svc-wtj7d Nov 5 23:22:27.339: INFO: Got endpoints: latency-svc-wtj7d [48.027003ms] Nov 5 23:22:27.341: INFO: Created: latency-svc-zl924 Nov 5 23:22:27.343: INFO: Created: latency-svc-gw9zx Nov 5 23:22:27.349: INFO: Created: latency-svc-6nwqx Nov 5 23:22:27.351: INFO: Created: latency-svc-z94rr Nov 5 23:22:27.354: INFO: Created: latency-svc-hh88p Nov 5 23:22:27.356: INFO: Created: 
latency-svc-pr9h2 Nov 5 23:22:27.362: INFO: Created: latency-svc-8sk2f Nov 5 23:22:27.365: INFO: Created: latency-svc-bqpjb Nov 5 23:22:27.375: INFO: Created: latency-svc-9vqls Nov 5 23:22:27.376: INFO: Created: latency-svc-lw7mx Nov 5 23:22:27.379: INFO: Created: latency-svc-tsrz2 Nov 5 23:22:27.385: INFO: Created: latency-svc-2mhkw Nov 5 23:22:27.389: INFO: Created: latency-svc-2b5rs Nov 5 23:22:27.389: INFO: Got endpoints: latency-svc-zl924 [94.397021ms] Nov 5 23:22:27.391: INFO: Created: latency-svc-n5jh2 Nov 5 23:22:27.394: INFO: Created: latency-svc-fqmjx Nov 5 23:22:27.399: INFO: Created: latency-svc-gk265 Nov 5 23:22:27.438: INFO: Got endpoints: latency-svc-gw9zx [139.402289ms] Nov 5 23:22:27.443: INFO: Created: latency-svc-6nlqq Nov 5 23:22:27.489: INFO: Got endpoints: latency-svc-6nwqx [185.86481ms] Nov 5 23:22:27.496: INFO: Created: latency-svc-ct8bf Nov 5 23:22:27.544: INFO: Got endpoints: latency-svc-z94rr [238.557286ms] Nov 5 23:22:27.549: INFO: Created: latency-svc-9x6ll Nov 5 23:22:27.589: INFO: Got endpoints: latency-svc-hh88p [280.299676ms] Nov 5 23:22:27.594: INFO: Created: latency-svc-5k8hq Nov 5 23:22:27.639: INFO: Got endpoints: latency-svc-pr9h2 [327.517736ms] Nov 5 23:22:27.645: INFO: Created: latency-svc-65q88 Nov 5 23:22:27.693: INFO: Got endpoints: latency-svc-8sk2f [378.217787ms] Nov 5 23:22:27.698: INFO: Created: latency-svc-hzn54 Nov 5 23:22:27.740: INFO: Got endpoints: latency-svc-bqpjb [421.056922ms] Nov 5 23:22:27.744: INFO: Created: latency-svc-sm4b9 Nov 5 23:22:27.790: INFO: Got endpoints: latency-svc-9vqls [468.350233ms] Nov 5 23:22:27.795: INFO: Created: latency-svc-v9q9f Nov 5 23:22:27.839: INFO: Got endpoints: latency-svc-lw7mx [514.571678ms] Nov 5 23:22:27.844: INFO: Created: latency-svc-txccs Nov 5 23:22:27.889: INFO: Got endpoints: latency-svc-tsrz2 [561.702525ms] Nov 5 23:22:27.895: INFO: Created: latency-svc-xg7c7 Nov 5 23:22:27.940: INFO: Got endpoints: latency-svc-2mhkw [610.137912ms] Nov 5 23:22:27.946: INFO: Created: latency-svc-qtn2f Nov 5 23:22:28.039: INFO: Got endpoints: latency-svc-2b5rs [705.833461ms] Nov 5 23:22:28.045: INFO: Created: latency-svc-qb6s5 Nov 5 23:22:28.089: INFO: Got endpoints: latency-svc-n5jh2 [752.618665ms] Nov 5 23:22:28.095: INFO: Created: latency-svc-ztqq5 Nov 5 23:22:28.139: INFO: Got endpoints: latency-svc-fqmjx [799.803325ms] Nov 5 23:22:28.152: INFO: Created: latency-svc-5gkvv Nov 5 23:22:28.189: INFO: Got endpoints: latency-svc-gk265 [799.291565ms] Nov 5 23:22:28.195: INFO: Created: latency-svc-wprqn Nov 5 23:22:28.239: INFO: Got endpoints: latency-svc-6nlqq [800.885189ms] Nov 5 23:22:28.245: INFO: Created: latency-svc-qj88z Nov 5 23:22:28.291: INFO: Got endpoints: latency-svc-ct8bf [802.025298ms] Nov 5 23:22:28.296: INFO: Created: latency-svc-s2smc Nov 5 23:22:28.339: INFO: Got endpoints: latency-svc-9x6ll [794.944479ms] Nov 5 23:22:28.344: INFO: Created: latency-svc-sqpz9 Nov 5 23:22:28.389: INFO: Got endpoints: latency-svc-5k8hq [799.990342ms] Nov 5 23:22:28.395: INFO: Created: latency-svc-jtl85 Nov 5 23:22:28.440: INFO: Got endpoints: latency-svc-65q88 [800.948634ms] Nov 5 23:22:28.447: INFO: Created: latency-svc-hk2jk Nov 5 23:22:28.490: INFO: Got endpoints: latency-svc-hzn54 [797.337154ms] Nov 5 23:22:28.496: INFO: Created: latency-svc-bhdbd Nov 5 23:22:28.540: INFO: Got endpoints: latency-svc-sm4b9 [799.809425ms] Nov 5 23:22:28.545: INFO: Created: latency-svc-kv926 Nov 5 23:22:28.589: INFO: Got endpoints: latency-svc-v9q9f [799.006643ms] Nov 5 23:22:28.596: INFO: Created: latency-svc-p5n4j Nov 5 
23:22:28.639: INFO: Got endpoints: latency-svc-txccs [800.331277ms] Nov 5 23:22:28.644: INFO: Created: latency-svc-prngb Nov 5 23:22:28.690: INFO: Got endpoints: latency-svc-xg7c7 [800.441552ms] Nov 5 23:22:28.695: INFO: Created: latency-svc-fm2zj Nov 5 23:22:28.740: INFO: Got endpoints: latency-svc-qtn2f [800.044045ms] Nov 5 23:22:28.746: INFO: Created: latency-svc-6g4qj Nov 5 23:22:28.790: INFO: Got endpoints: latency-svc-qb6s5 [750.56731ms] Nov 5 23:22:28.795: INFO: Created: latency-svc-4brwr Nov 5 23:22:28.839: INFO: Got endpoints: latency-svc-ztqq5 [749.698838ms] Nov 5 23:22:28.844: INFO: Created: latency-svc-928sp Nov 5 23:22:28.890: INFO: Got endpoints: latency-svc-5gkvv [750.962887ms] Nov 5 23:22:28.896: INFO: Created: latency-svc-2c2vr Nov 5 23:22:28.940: INFO: Got endpoints: latency-svc-wprqn [751.342314ms] Nov 5 23:22:28.946: INFO: Created: latency-svc-lc2px Nov 5 23:22:28.990: INFO: Got endpoints: latency-svc-qj88z [750.434691ms] Nov 5 23:22:28.996: INFO: Created: latency-svc-hzqzl Nov 5 23:22:29.040: INFO: Got endpoints: latency-svc-s2smc [749.043383ms] Nov 5 23:22:29.046: INFO: Created: latency-svc-6mlt5 Nov 5 23:22:29.090: INFO: Got endpoints: latency-svc-sqpz9 [750.825978ms] Nov 5 23:22:29.095: INFO: Created: latency-svc-9ftvq Nov 5 23:22:29.140: INFO: Got endpoints: latency-svc-jtl85 [751.684058ms] Nov 5 23:22:29.146: INFO: Created: latency-svc-4rp9w Nov 5 23:22:29.190: INFO: Got endpoints: latency-svc-hk2jk [749.540261ms] Nov 5 23:22:29.196: INFO: Created: latency-svc-lnlf9 Nov 5 23:22:29.240: INFO: Got endpoints: latency-svc-bhdbd [749.784231ms] Nov 5 23:22:29.245: INFO: Created: latency-svc-5hg46 Nov 5 23:22:29.290: INFO: Got endpoints: latency-svc-kv926 [750.456569ms] Nov 5 23:22:29.295: INFO: Created: latency-svc-ftlg4 Nov 5 23:22:29.341: INFO: Got endpoints: latency-svc-p5n4j [751.893572ms] Nov 5 23:22:29.347: INFO: Created: latency-svc-r6dl9 Nov 5 23:22:29.390: INFO: Got endpoints: latency-svc-prngb [750.697321ms] Nov 5 23:22:29.396: INFO: Created: latency-svc-ml6ks Nov 5 23:22:29.440: INFO: Got endpoints: latency-svc-fm2zj [749.703114ms] Nov 5 23:22:29.446: INFO: Created: latency-svc-7kzc2 Nov 5 23:22:29.490: INFO: Got endpoints: latency-svc-6g4qj [749.956267ms] Nov 5 23:22:29.497: INFO: Created: latency-svc-s6kcd Nov 5 23:22:29.539: INFO: Got endpoints: latency-svc-4brwr [748.961284ms] Nov 5 23:22:29.544: INFO: Created: latency-svc-j9zrv Nov 5 23:22:29.590: INFO: Got endpoints: latency-svc-928sp [751.016566ms] Nov 5 23:22:29.595: INFO: Created: latency-svc-wwz7h Nov 5 23:22:29.640: INFO: Got endpoints: latency-svc-2c2vr [749.225318ms] Nov 5 23:22:29.645: INFO: Created: latency-svc-wt65k Nov 5 23:22:29.690: INFO: Got endpoints: latency-svc-lc2px [749.547374ms] Nov 5 23:22:29.695: INFO: Created: latency-svc-5jhjd Nov 5 23:22:29.740: INFO: Got endpoints: latency-svc-hzqzl [750.206906ms] Nov 5 23:22:29.746: INFO: Created: latency-svc-pglkg Nov 5 23:22:29.790: INFO: Got endpoints: latency-svc-6mlt5 [749.996188ms] Nov 5 23:22:29.796: INFO: Created: latency-svc-bmh4z Nov 5 23:22:29.840: INFO: Got endpoints: latency-svc-9ftvq [749.697287ms] Nov 5 23:22:29.845: INFO: Created: latency-svc-45tvr Nov 5 23:22:29.890: INFO: Got endpoints: latency-svc-4rp9w [749.428316ms] Nov 5 23:22:29.895: INFO: Created: latency-svc-86jkj Nov 5 23:22:29.939: INFO: Got endpoints: latency-svc-lnlf9 [749.396559ms] Nov 5 23:22:29.947: INFO: Created: latency-svc-q92bg Nov 5 23:22:29.989: INFO: Got endpoints: latency-svc-5hg46 [749.308111ms] Nov 5 23:22:29.996: INFO: Created: latency-svc-wm65n Nov 5 
23:22:30.040: INFO: Got endpoints: latency-svc-ftlg4 [749.767625ms] Nov 5 23:22:30.045: INFO: Created: latency-svc-ckcds Nov 5 23:22:30.090: INFO: Got endpoints: latency-svc-r6dl9 [749.197592ms] Nov 5 23:22:30.096: INFO: Created: latency-svc-vj5pv Nov 5 23:22:30.140: INFO: Got endpoints: latency-svc-ml6ks [749.722586ms] Nov 5 23:22:30.147: INFO: Created: latency-svc-b25d9 Nov 5 23:22:30.190: INFO: Got endpoints: latency-svc-7kzc2 [750.289625ms] Nov 5 23:22:30.197: INFO: Created: latency-svc-96d6p Nov 5 23:22:30.240: INFO: Got endpoints: latency-svc-s6kcd [749.930673ms] Nov 5 23:22:30.246: INFO: Created: latency-svc-ddvxg Nov 5 23:22:30.290: INFO: Got endpoints: latency-svc-j9zrv [750.751746ms] Nov 5 23:22:30.294: INFO: Created: latency-svc-dbmds Nov 5 23:22:30.341: INFO: Got endpoints: latency-svc-wwz7h [750.736413ms] Nov 5 23:22:30.346: INFO: Created: latency-svc-qcbh4 Nov 5 23:22:30.389: INFO: Got endpoints: latency-svc-wt65k [749.628426ms] Nov 5 23:22:30.395: INFO: Created: latency-svc-m7b4p Nov 5 23:22:30.441: INFO: Got endpoints: latency-svc-5jhjd [750.965317ms] Nov 5 23:22:30.446: INFO: Created: latency-svc-2q4qj Nov 5 23:22:30.490: INFO: Got endpoints: latency-svc-pglkg [750.236462ms] Nov 5 23:22:30.496: INFO: Created: latency-svc-685bz Nov 5 23:22:30.539: INFO: Got endpoints: latency-svc-bmh4z [749.364146ms] Nov 5 23:22:30.545: INFO: Created: latency-svc-72fc4 Nov 5 23:22:30.590: INFO: Got endpoints: latency-svc-45tvr [749.775911ms] Nov 5 23:22:30.594: INFO: Created: latency-svc-rb8sf Nov 5 23:22:30.640: INFO: Got endpoints: latency-svc-86jkj [750.374595ms] Nov 5 23:22:30.646: INFO: Created: latency-svc-zmpnc Nov 5 23:22:30.690: INFO: Got endpoints: latency-svc-q92bg [751.199642ms] Nov 5 23:22:30.696: INFO: Created: latency-svc-fmfp7 Nov 5 23:22:30.740: INFO: Got endpoints: latency-svc-wm65n [750.757667ms] Nov 5 23:22:30.746: INFO: Created: latency-svc-mf6hd Nov 5 23:22:30.790: INFO: Got endpoints: latency-svc-ckcds [750.110441ms] Nov 5 23:22:30.797: INFO: Created: latency-svc-98k6b Nov 5 23:22:30.840: INFO: Got endpoints: latency-svc-vj5pv [749.377662ms] Nov 5 23:22:30.845: INFO: Created: latency-svc-xf6xf Nov 5 23:22:30.889: INFO: Got endpoints: latency-svc-b25d9 [748.639069ms] Nov 5 23:22:30.895: INFO: Created: latency-svc-vd68k Nov 5 23:22:30.939: INFO: Got endpoints: latency-svc-96d6p [748.530778ms] Nov 5 23:22:30.944: INFO: Created: latency-svc-p8wtd Nov 5 23:22:30.990: INFO: Got endpoints: latency-svc-ddvxg [749.703263ms] Nov 5 23:22:30.996: INFO: Created: latency-svc-tnzlh Nov 5 23:22:31.041: INFO: Got endpoints: latency-svc-dbmds [750.889896ms] Nov 5 23:22:31.046: INFO: Created: latency-svc-lb8v4 Nov 5 23:22:31.090: INFO: Got endpoints: latency-svc-qcbh4 [749.204672ms] Nov 5 23:22:31.095: INFO: Created: latency-svc-zb68w Nov 5 23:22:31.139: INFO: Got endpoints: latency-svc-m7b4p [749.897434ms] Nov 5 23:22:31.145: INFO: Created: latency-svc-lmx7n Nov 5 23:22:31.190: INFO: Got endpoints: latency-svc-2q4qj [749.010611ms] Nov 5 23:22:31.196: INFO: Created: latency-svc-d99dg Nov 5 23:22:31.239: INFO: Got endpoints: latency-svc-685bz [748.314315ms] Nov 5 23:22:31.245: INFO: Created: latency-svc-b2mdj Nov 5 23:22:31.290: INFO: Got endpoints: latency-svc-72fc4 [750.692748ms] Nov 5 23:22:31.296: INFO: Created: latency-svc-mdssl Nov 5 23:22:31.340: INFO: Got endpoints: latency-svc-rb8sf [750.253435ms] Nov 5 23:22:31.345: INFO: Created: latency-svc-khhb7 Nov 5 23:22:31.389: INFO: Got endpoints: latency-svc-zmpnc [748.763969ms] Nov 5 23:22:31.395: INFO: Created: latency-svc-tzxvb Nov 
5 23:22:31.440: INFO: Got endpoints: latency-svc-fmfp7 [749.089525ms] Nov 5 23:22:31.445: INFO: Created: latency-svc-llvhs Nov 5 23:22:31.489: INFO: Got endpoints: latency-svc-mf6hd [748.938129ms] Nov 5 23:22:31.495: INFO: Created: latency-svc-sfhs4 Nov 5 23:22:31.540: INFO: Got endpoints: latency-svc-98k6b [749.774708ms] Nov 5 23:22:31.545: INFO: Created: latency-svc-hgnnl Nov 5 23:22:31.589: INFO: Got endpoints: latency-svc-xf6xf [749.245885ms] Nov 5 23:22:31.595: INFO: Created: latency-svc-5zrc2 Nov 5 23:22:31.639: INFO: Got endpoints: latency-svc-vd68k [750.821181ms] Nov 5 23:22:31.644: INFO: Created: latency-svc-kr4cg Nov 5 23:22:31.690: INFO: Got endpoints: latency-svc-p8wtd [751.164137ms] Nov 5 23:22:31.695: INFO: Created: latency-svc-9vmtr Nov 5 23:22:31.740: INFO: Got endpoints: latency-svc-tnzlh [750.324ms] Nov 5 23:22:31.747: INFO: Created: latency-svc-vkp6c Nov 5 23:22:31.789: INFO: Got endpoints: latency-svc-lb8v4 [748.558455ms] Nov 5 23:22:31.794: INFO: Created: latency-svc-s6zgp Nov 5 23:22:31.840: INFO: Got endpoints: latency-svc-zb68w [750.140667ms] Nov 5 23:22:31.846: INFO: Created: latency-svc-5gr8g Nov 5 23:22:31.890: INFO: Got endpoints: latency-svc-lmx7n [750.500547ms] Nov 5 23:22:31.896: INFO: Created: latency-svc-495n8 Nov 5 23:22:31.939: INFO: Got endpoints: latency-svc-d99dg [749.313423ms] Nov 5 23:22:31.944: INFO: Created: latency-svc-8mhbw Nov 5 23:22:31.989: INFO: Got endpoints: latency-svc-b2mdj [750.689812ms] Nov 5 23:22:31.994: INFO: Created: latency-svc-5822l Nov 5 23:22:32.040: INFO: Got endpoints: latency-svc-mdssl [749.867917ms] Nov 5 23:22:32.046: INFO: Created: latency-svc-jbkv9 Nov 5 23:22:32.089: INFO: Got endpoints: latency-svc-khhb7 [749.414778ms] Nov 5 23:22:32.094: INFO: Created: latency-svc-kk29d Nov 5 23:22:32.139: INFO: Got endpoints: latency-svc-tzxvb [750.119824ms] Nov 5 23:22:32.145: INFO: Created: latency-svc-xlgnt Nov 5 23:22:32.190: INFO: Got endpoints: latency-svc-llvhs [750.780839ms] Nov 5 23:22:32.196: INFO: Created: latency-svc-4n699 Nov 5 23:22:32.239: INFO: Got endpoints: latency-svc-sfhs4 [749.930035ms] Nov 5 23:22:32.244: INFO: Created: latency-svc-zm8xr Nov 5 23:22:32.289: INFO: Got endpoints: latency-svc-hgnnl [748.956737ms] Nov 5 23:22:32.295: INFO: Created: latency-svc-kd4vn Nov 5 23:22:32.341: INFO: Got endpoints: latency-svc-5zrc2 [751.748759ms] Nov 5 23:22:32.347: INFO: Created: latency-svc-f986v Nov 5 23:22:32.389: INFO: Got endpoints: latency-svc-kr4cg [749.705072ms] Nov 5 23:22:32.394: INFO: Created: latency-svc-hgdsl Nov 5 23:22:32.439: INFO: Got endpoints: latency-svc-9vmtr [749.26701ms] Nov 5 23:22:32.447: INFO: Created: latency-svc-k6zbg Nov 5 23:22:32.489: INFO: Got endpoints: latency-svc-vkp6c [748.919844ms] Nov 5 23:22:32.497: INFO: Created: latency-svc-ngqpn Nov 5 23:22:32.541: INFO: Got endpoints: latency-svc-s6zgp [751.392575ms] Nov 5 23:22:32.547: INFO: Created: latency-svc-77fpw Nov 5 23:22:32.590: INFO: Got endpoints: latency-svc-5gr8g [749.236881ms] Nov 5 23:22:32.595: INFO: Created: latency-svc-pndtl Nov 5 23:22:32.641: INFO: Got endpoints: latency-svc-495n8 [750.908567ms] Nov 5 23:22:32.647: INFO: Created: latency-svc-cst8d Nov 5 23:22:32.689: INFO: Got endpoints: latency-svc-8mhbw [749.567997ms] Nov 5 23:22:32.695: INFO: Created: latency-svc-4t6qv Nov 5 23:22:32.740: INFO: Got endpoints: latency-svc-5822l [750.608168ms] Nov 5 23:22:32.745: INFO: Created: latency-svc-97pbw Nov 5 23:22:32.791: INFO: Got endpoints: latency-svc-jbkv9 [750.67744ms] Nov 5 23:22:32.797: INFO: Created: latency-svc-wmdrf Nov 5 
23:22:32.839: INFO: Got endpoints: latency-svc-kk29d [749.871135ms] Nov 5 23:22:32.844: INFO: Created: latency-svc-v485c Nov 5 23:22:32.890: INFO: Got endpoints: latency-svc-xlgnt [750.494428ms] Nov 5 23:22:32.896: INFO: Created: latency-svc-gqmbh Nov 5 23:22:32.939: INFO: Got endpoints: latency-svc-4n699 [748.645513ms] Nov 5 23:22:32.946: INFO: Created: latency-svc-fwzvb Nov 5 23:22:32.989: INFO: Got endpoints: latency-svc-zm8xr [750.065916ms] Nov 5 23:22:32.994: INFO: Created: latency-svc-rsgst Nov 5 23:22:33.040: INFO: Got endpoints: latency-svc-kd4vn [751.212634ms] Nov 5 23:22:33.045: INFO: Created: latency-svc-t76rb Nov 5 23:22:33.089: INFO: Got endpoints: latency-svc-f986v [748.75891ms] Nov 5 23:22:33.095: INFO: Created: latency-svc-4q4th Nov 5 23:22:33.138: INFO: Got endpoints: latency-svc-hgdsl [749.057125ms] Nov 5 23:22:33.144: INFO: Created: latency-svc-587lj Nov 5 23:22:33.188: INFO: Got endpoints: latency-svc-k6zbg [749.116236ms] Nov 5 23:22:33.194: INFO: Created: latency-svc-9klg5 Nov 5 23:22:33.239: INFO: Got endpoints: latency-svc-ngqpn [749.550785ms] Nov 5 23:22:33.244: INFO: Created: latency-svc-8pf5p Nov 5 23:22:33.291: INFO: Got endpoints: latency-svc-77fpw [749.812801ms] Nov 5 23:22:33.296: INFO: Created: latency-svc-7vsvz Nov 5 23:22:33.340: INFO: Got endpoints: latency-svc-pndtl [749.720175ms] Nov 5 23:22:33.345: INFO: Created: latency-svc-k7f6r Nov 5 23:22:33.390: INFO: Got endpoints: latency-svc-cst8d [748.99718ms] Nov 5 23:22:33.396: INFO: Created: latency-svc-fc5vt Nov 5 23:22:33.441: INFO: Got endpoints: latency-svc-4t6qv [751.962221ms] Nov 5 23:22:33.446: INFO: Created: latency-svc-7dfmr Nov 5 23:22:33.540: INFO: Got endpoints: latency-svc-97pbw [799.780589ms] Nov 5 23:22:33.545: INFO: Created: latency-svc-2w8sr Nov 5 23:22:33.590: INFO: Got endpoints: latency-svc-wmdrf [799.064541ms] Nov 5 23:22:33.595: INFO: Created: latency-svc-qdc8b Nov 5 23:22:33.640: INFO: Got endpoints: latency-svc-v485c [800.401854ms] Nov 5 23:22:33.645: INFO: Created: latency-svc-64pcr Nov 5 23:22:33.690: INFO: Got endpoints: latency-svc-gqmbh [799.661306ms] Nov 5 23:22:33.695: INFO: Created: latency-svc-jlqjv Nov 5 23:22:33.739: INFO: Got endpoints: latency-svc-fwzvb [800.333689ms] Nov 5 23:22:33.745: INFO: Created: latency-svc-lgvtt Nov 5 23:22:33.789: INFO: Got endpoints: latency-svc-rsgst [799.573487ms] Nov 5 23:22:33.794: INFO: Created: latency-svc-s4pbv Nov 5 23:22:33.840: INFO: Got endpoints: latency-svc-t76rb [799.670094ms] Nov 5 23:22:33.846: INFO: Created: latency-svc-zk2c8 Nov 5 23:22:33.889: INFO: Got endpoints: latency-svc-4q4th [799.503078ms] Nov 5 23:22:33.896: INFO: Created: latency-svc-7mjb9 Nov 5 23:22:33.941: INFO: Got endpoints: latency-svc-587lj [802.345769ms] Nov 5 23:22:33.947: INFO: Created: latency-svc-xxr24 Nov 5 23:22:33.989: INFO: Got endpoints: latency-svc-9klg5 [800.736698ms] Nov 5 23:22:33.996: INFO: Created: latency-svc-8r2wr Nov 5 23:22:34.040: INFO: Got endpoints: latency-svc-8pf5p [801.324684ms] Nov 5 23:22:34.046: INFO: Created: latency-svc-pg4n4 Nov 5 23:22:34.090: INFO: Got endpoints: latency-svc-7vsvz [799.258454ms] Nov 5 23:22:34.095: INFO: Created: latency-svc-9jqd5 Nov 5 23:22:34.141: INFO: Got endpoints: latency-svc-k7f6r [801.16638ms] Nov 5 23:22:34.147: INFO: Created: latency-svc-42js6 Nov 5 23:22:34.189: INFO: Got endpoints: latency-svc-fc5vt [799.679134ms] Nov 5 23:22:34.196: INFO: Created: latency-svc-fj69v Nov 5 23:22:34.240: INFO: Got endpoints: latency-svc-7dfmr [799.251353ms] Nov 5 23:22:34.246: INFO: Created: latency-svc-dxxdd Nov 5 
23:22:34.339: INFO: Got endpoints: latency-svc-2w8sr [799.281937ms] Nov 5 23:22:34.345: INFO: Created: latency-svc-c78kn Nov 5 23:22:34.389: INFO: Got endpoints: latency-svc-qdc8b [799.197785ms] Nov 5 23:22:34.394: INFO: Created: latency-svc-s4psq Nov 5 23:22:34.440: INFO: Got endpoints: latency-svc-64pcr [800.257794ms] Nov 5 23:22:34.447: INFO: Created: latency-svc-ckm4n Nov 5 23:22:34.490: INFO: Got endpoints: latency-svc-jlqjv [799.898506ms] Nov 5 23:22:34.495: INFO: Created: latency-svc-2kj5r Nov 5 23:22:34.540: INFO: Got endpoints: latency-svc-lgvtt [800.290468ms] Nov 5 23:22:34.546: INFO: Created: latency-svc-f42x8 Nov 5 23:22:34.590: INFO: Got endpoints: latency-svc-s4pbv [800.546121ms] Nov 5 23:22:34.595: INFO: Created: latency-svc-888kb Nov 5 23:22:34.639: INFO: Got endpoints: latency-svc-zk2c8 [799.020853ms] Nov 5 23:22:34.645: INFO: Created: latency-svc-h7mf4 Nov 5 23:22:34.689: INFO: Got endpoints: latency-svc-7mjb9 [800.396215ms] Nov 5 23:22:34.695: INFO: Created: latency-svc-6qn86 Nov 5 23:22:34.740: INFO: Got endpoints: latency-svc-xxr24 [798.776152ms] Nov 5 23:22:34.745: INFO: Created: latency-svc-cbhd8 Nov 5 23:22:34.790: INFO: Got endpoints: latency-svc-8r2wr [800.513787ms] Nov 5 23:22:34.795: INFO: Created: latency-svc-z428m Nov 5 23:22:34.839: INFO: Got endpoints: latency-svc-pg4n4 [799.202737ms] Nov 5 23:22:34.845: INFO: Created: latency-svc-m9m56 Nov 5 23:22:34.890: INFO: Got endpoints: latency-svc-9jqd5 [799.867066ms] Nov 5 23:22:34.897: INFO: Created: latency-svc-rstln Nov 5 23:22:34.941: INFO: Got endpoints: latency-svc-42js6 [799.677068ms] Nov 5 23:22:34.946: INFO: Created: latency-svc-w6pv6 Nov 5 23:22:34.989: INFO: Got endpoints: latency-svc-fj69v [799.88523ms] Nov 5 23:22:34.995: INFO: Created: latency-svc-qb4wv Nov 5 23:22:35.040: INFO: Got endpoints: latency-svc-dxxdd [799.422582ms] Nov 5 23:22:35.046: INFO: Created: latency-svc-77jq6 Nov 5 23:22:35.089: INFO: Got endpoints: latency-svc-c78kn [750.212586ms] Nov 5 23:22:35.094: INFO: Created: latency-svc-9ps26 Nov 5 23:22:35.140: INFO: Got endpoints: latency-svc-s4psq [750.971492ms] Nov 5 23:22:35.146: INFO: Created: latency-svc-9vzmp Nov 5 23:22:35.191: INFO: Got endpoints: latency-svc-ckm4n [750.610464ms] Nov 5 23:22:35.196: INFO: Created: latency-svc-h97rv Nov 5 23:22:35.240: INFO: Got endpoints: latency-svc-2kj5r [750.20842ms] Nov 5 23:22:35.245: INFO: Created: latency-svc-mdvmq Nov 5 23:22:35.290: INFO: Got endpoints: latency-svc-f42x8 [749.760329ms] Nov 5 23:22:35.340: INFO: Got endpoints: latency-svc-888kb [750.684282ms] Nov 5 23:22:35.390: INFO: Got endpoints: latency-svc-h7mf4 [751.074164ms] Nov 5 23:22:35.439: INFO: Got endpoints: latency-svc-6qn86 [749.937642ms] Nov 5 23:22:35.490: INFO: Got endpoints: latency-svc-cbhd8 [749.922018ms] Nov 5 23:22:35.540: INFO: Got endpoints: latency-svc-z428m [750.128215ms] Nov 5 23:22:35.590: INFO: Got endpoints: latency-svc-m9m56 [750.102453ms] Nov 5 23:22:35.639: INFO: Got endpoints: latency-svc-rstln [749.568692ms] Nov 5 23:22:35.690: INFO: Got endpoints: latency-svc-w6pv6 [748.960574ms] Nov 5 23:22:35.739: INFO: Got endpoints: latency-svc-qb4wv [749.588217ms] Nov 5 23:22:35.790: INFO: Got endpoints: latency-svc-77jq6 [750.161852ms] Nov 5 23:22:35.840: INFO: Got endpoints: latency-svc-9ps26 [750.733182ms] Nov 5 23:22:35.889: INFO: Got endpoints: latency-svc-9vzmp [749.054432ms] Nov 5 23:22:35.940: INFO: Got endpoints: latency-svc-h97rv [749.737741ms] Nov 5 23:22:35.990: INFO: Got endpoints: latency-svc-mdvmq [750.382167ms] Nov 5 23:22:35.990: INFO: Latencies: 
[10.305114ms 13.516836ms 16.666713ms 19.068065ms 21.328863ms 25.022823ms 27.876801ms 30.62041ms 33.863537ms 37.950972ms 38.933666ms 42.890633ms 44.345981ms 45.01265ms 45.285064ms 45.535458ms 45.664447ms 45.801029ms 45.835888ms 45.974823ms 46.153863ms 46.323495ms 47.097915ms 47.228252ms 47.358747ms 47.371614ms 48.027003ms 48.142774ms 48.669192ms 50.538657ms 94.397021ms 139.402289ms 185.86481ms 238.557286ms 280.299676ms 327.517736ms 378.217787ms 421.056922ms 468.350233ms 514.571678ms 561.702525ms 610.137912ms 705.833461ms 748.314315ms 748.530778ms 748.558455ms 748.639069ms 748.645513ms 748.75891ms 748.763969ms 748.919844ms 748.938129ms 748.956737ms 748.960574ms 748.961284ms 748.99718ms 749.010611ms 749.043383ms 749.054432ms 749.057125ms 749.089525ms 749.116236ms 749.197592ms 749.204672ms 749.225318ms 749.236881ms 749.245885ms 749.26701ms 749.308111ms 749.313423ms 749.364146ms 749.377662ms 749.396559ms 749.414778ms 749.428316ms 749.540261ms 749.547374ms 749.550785ms 749.567997ms 749.568692ms 749.588217ms 749.628426ms 749.697287ms 749.698838ms 749.703114ms 749.703263ms 749.705072ms 749.720175ms 749.722586ms 749.737741ms 749.760329ms 749.767625ms 749.774708ms 749.775911ms 749.784231ms 749.812801ms 749.867917ms 749.871135ms 749.897434ms 749.922018ms 749.930035ms 749.930673ms 749.937642ms 749.956267ms 749.996188ms 750.065916ms 750.102453ms 750.110441ms 750.119824ms 750.128215ms 750.140667ms 750.161852ms 750.206906ms 750.20842ms 750.212586ms 750.236462ms 750.253435ms 750.289625ms 750.324ms 750.374595ms 750.382167ms 750.434691ms 750.456569ms 750.494428ms 750.500547ms 750.56731ms 750.608168ms 750.610464ms 750.67744ms 750.684282ms 750.689812ms 750.692748ms 750.697321ms 750.733182ms 750.736413ms 750.751746ms 750.757667ms 750.780839ms 750.821181ms 750.825978ms 750.889896ms 750.908567ms 750.962887ms 750.965317ms 750.971492ms 751.016566ms 751.074164ms 751.164137ms 751.199642ms 751.212634ms 751.342314ms 751.392575ms 751.684058ms 751.748759ms 751.893572ms 751.962221ms 752.618665ms 794.944479ms 797.337154ms 798.776152ms 799.006643ms 799.020853ms 799.064541ms 799.197785ms 799.202737ms 799.251353ms 799.258454ms 799.281937ms 799.291565ms 799.422582ms 799.503078ms 799.573487ms 799.661306ms 799.670094ms 799.677068ms 799.679134ms 799.780589ms 799.803325ms 799.809425ms 799.867066ms 799.88523ms 799.898506ms 799.990342ms 800.044045ms 800.257794ms 800.290468ms 800.331277ms 800.333689ms 800.396215ms 800.401854ms 800.441552ms 800.513787ms 800.546121ms 800.736698ms 800.885189ms 800.948634ms 801.16638ms 801.324684ms 802.025298ms 802.345769ms] Nov 5 23:22:35.991: INFO: 50 %ile: 749.930035ms Nov 5 23:22:35.991: INFO: 90 %ile: 799.88523ms Nov 5 23:22:35.991: INFO: 99 %ile: 802.025298ms Nov 5 23:22:35.991: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:35.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-115" for this suite. 
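------------------------------
[Editor's note] The [sig-network] Service endpoints latency spec just concluded (and summarized below) measures how quickly the endpoints controller reacts to Service creation: it repeatedly creates Services selecting the svc-latency-rc pod, records the delay until each Service's Endpoints object is populated ("Got endpoints"), and reports the 50th/90th/99th percentiles. A minimal, hypothetical client-go sketch of one such measurement follows; the namespace "default", the Service name "latency-probe", and the pod label name=svc-latency-rc are illustrative assumptions, not values confirmed by this log.

  package main

  import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    // Hypothetical sketch, not the e2e suite's implementation.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    ctx := context.TODO()

    // Create a Service selecting already-running backend pods.
    svc := &corev1.Service{
      ObjectMeta: metav1.ObjectMeta{Name: "latency-probe"},
      Spec: corev1.ServiceSpec{
        Selector: map[string]string{"name": "svc-latency-rc"}, // assumed pod label
        Ports:    []corev1.ServicePort{{Port: 80}},
      },
    }
    start := time.Now()
    if _, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
      panic(err)
    }

    // Wait until the endpoints controller fills in at least one address,
    // then report the elapsed time (the "Got endpoints" latency).
    err = wait.PollImmediate(5*time.Millisecond, 10*time.Second, func() (bool, error) {
      ep, err := cs.CoreV1().Endpoints("default").Get(ctx, "latency-probe", metav1.GetOptions{})
      if err != nil {
        return false, nil // Endpoints object not created yet; keep polling
      }
      for _, ss := range ep.Subsets {
        if len(ss.Addresses) > 0 {
          return true, nil
        }
      }
      return false, nil
    })
    if err != nil {
      panic(err)
    }
    fmt.Printf("Got endpoints: latency-probe [%v]\n", time.Since(start))
  }

The real suite watches Endpoints rather than polling; polling is used here only to keep the sketch short.
------------------------------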
• [SLOW TEST:12.968 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":7,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:20.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:37.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5370" for this suite. • [SLOW TEST:16.105 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":6,"skipped":134,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:36.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 5 23:22:36.080: INFO: Waiting up to 5m0s for pod "downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07" in namespace "downward-api-3370" to be "Succeeded or Failed" Nov 5 23:22:36.086: INFO: Pod "downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267791ms Nov 5 23:22:38.089: INFO: Pod "downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009657537s Nov 5 23:22:40.093: INFO: Pod "downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013374336s STEP: Saw pod success Nov 5 23:22:40.093: INFO: Pod "downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07" satisfied condition "Succeeded or Failed" Nov 5 23:22:40.095: INFO: Trying to get logs from node node2 pod downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07 container dapi-container: STEP: delete the pod Nov 5 23:22:40.108: INFO: Waiting for pod downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07 to disappear Nov 5 23:22:40.110: INFO: Pod downward-api-772b994e-4e48-48e8-8e0f-2dc01547bb07 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:40.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3370" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":157,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:40.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9038" for this suite. 
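------------------------------
[Editor's note] The ConfigMap lifecycle spec above walks the object through the STEPs logged: create, fetch, patch, a label-selected list across all namespaces, and delete-by-collection. A minimal client-go sketch of the same sequence, assuming a reachable cluster; the namespace, name, data keys, and label are illustrative, not taken from this run.

  package main

  import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    // Hypothetical sketch, not the e2e suite's implementation.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    ctx, ns := context.TODO(), "default"

    // creating a ConfigMap
    cm := &corev1.ConfigMap{
      ObjectMeta: metav1.ObjectMeta{Name: "lifecycle-demo", Labels: map[string]string{"test": "lifecycle"}},
      Data:       map[string]string{"key": "value"},
    }
    if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
      panic(err)
    }

    // patching the ConfigMap (strategic merge patch rewrites just this data key)
    patch := []byte(`{"data":{"key":"patched"}}`)
    if _, err := cs.CoreV1().ConfigMaps(ns).Patch(ctx, "lifecycle-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
      panic(err)
    }

    // listing all ConfigMaps in all namespaces with a label selector
    list, err := cs.CoreV1().ConfigMaps(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: "test=lifecycle"})
    if err != nil {
      panic(err)
    }
    fmt.Printf("found %d labelled ConfigMap(s)\n", len(list.Items))

    // deleting the ConfigMap by collection with a label selector
    if err := cs.CoreV1().ConfigMaps(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "test=lifecycle"}); err != nil {
      panic(err)
    }
  }

------------------------------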
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":9,"skipped":157,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:33.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af Nov 5 23:22:33.137: INFO: Pod name my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af: Found 0 pods out of 1 Nov 5 23:22:38.141: INFO: Pod name my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af: Found 1 pods out of 1 Nov 5 23:22:38.141: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af" are running Nov 5 23:22:38.146: INFO: Pod "my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af-gxjlw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:22:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:22:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:22:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:22:33 +0000 UTC Reason: Message:}]) Nov 5 23:22:38.147: INFO: Trying to dial the pod Nov 5 23:22:43.156: INFO: Controller my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af: Got expected result from replica 1 [my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af-gxjlw]: "my-hostname-basic-df0013f5-b47b-45c8-89dc-353975d0f3af-gxjlw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:43.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3119" for this suite. 
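------------------------------
[Editor's note] The ReplicationController spec above creates a one-replica controller whose pod serves its own hostname over HTTP, then dials the replica and checks that the response matches the pod name ("Got expected result from replica 1"). A hypothetical sketch of such a controller; the agnhost image string appears elsewhere in this log, but the serve-hostname argument, port 9376, namespace, and names are assumptions made for illustration.

  package main

  import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func int32Ptr(i int32) *int32 { return &i }

  func main() {
    // Hypothetical sketch, not the e2e suite's implementation.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }

    labels := map[string]string{"name": "hostname-basic-demo"}
    rc := &corev1.ReplicationController{
      ObjectMeta: metav1.ObjectMeta{Name: "hostname-basic-demo"},
      Spec: corev1.ReplicationControllerSpec{
        Replicas: int32Ptr(1),
        Selector: labels, // the RC adopts pods carrying these labels
        Template: &corev1.PodTemplateSpec{
          ObjectMeta: metav1.ObjectMeta{Labels: labels},
          Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
              Name:  "serve-hostname",
              Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
              Args:  []string{"serve-hostname"}, // assumed: replies with the pod's hostname
              Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
            }},
          },
        },
      },
    }
    if _, err := cs.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
    fmt.Println("created replication controller hostname-basic-demo")
  }

------------------------------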
• [SLOW TEST:10.055 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":7,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:37.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Nov 5 23:22:43.132: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6619 PodName:pod-sharedvolume-8dc62d4c-0bad-4164-98a5-7de7aa40ea82 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:22:43.132: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:22:43.226: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:43.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6619" for this suite. 
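------------------------------
[Editor's note] The EmptyDir spec above verifies that two containers in one pod see the same files through a shared emptyDir volume: one container writes /usr/share/volumeshare/shareddata.txt, and the test execs cat in the other (the ExecWithOptions line in the log). A minimal sketch of such a pod; the busybox image, names, and sleep-based keepalive are illustrative assumptions.

  package main

  import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    // Hypothetical sketch, not the e2e suite's implementation.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }

    mounts := []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}}
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
      Spec: corev1.PodSpec{
        // An emptyDir lives as long as the pod and is visible to every container that mounts it.
        Volumes: []corev1.Volume{{
          Name:         "shared-data",
          VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
        }},
        Containers: []corev1.Container{
          {
            Name:         "writer",
            Image:        "busybox",
            Command:      []string{"/bin/sh", "-c", "echo 'Hello from the writer' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
            VolumeMounts: mounts,
          },
          {
            // `kubectl exec ... -c reader -- cat /usr/share/volumeshare/shareddata.txt`
            // sees the file the writer container created.
            Name:         "reader",
            Image:        "busybox",
            Command:      []string{"/bin/sh", "-c", "sleep 3600"},
            VolumeMounts: mounts,
          },
        },
      },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
  }

------------------------------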
• [SLOW TEST:6.142 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":7,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:40.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-d94076cc-e24b-4116-aad6-4465b82bc908 STEP: Creating a pod to test consume configMaps Nov 5 23:22:40.242: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e" in namespace "projected-4029" to be "Succeeded or Failed" Nov 5 23:22:40.246: INFO: Pod "pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.737704ms Nov 5 23:22:42.250: INFO: Pod "pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008212841s Nov 5 23:22:44.254: INFO: Pod "pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01177382s STEP: Saw pod success Nov 5 23:22:44.254: INFO: Pod "pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e" satisfied condition "Succeeded or Failed" Nov 5 23:22:44.257: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e container agnhost-container: STEP: delete the pod Nov 5 23:22:44.272: INFO: Waiting for pod pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e to disappear Nov 5 23:22:44.274: INFO: Pod pod-projected-configmaps-04ad62b5-bd58-4429-8227-f4f086a6f87e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:44.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4029" for this suite. 
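------------------------------
[Editor's note] The projected-configMap spec above consumes a ConfigMap through a projected volume, remapping a key to a nested path and running the pod as a non-root user. A hypothetical sketch of the pod side; it assumes a ConfigMap named projected-configmap-demo with a key data-1 already exists, and the UID, paths, and agnhost mounttest arguments are illustrative guesses rather than values confirmed by this log.

  package main

  import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    // Hypothetical sketch, not the e2e suite's implementation.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }

    nonRootUID := int64(1000) // hypothetical non-root UID
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
      Spec: corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
        RestartPolicy:   corev1.RestartPolicyNever,
        Volumes: []corev1.Volume{{
          Name: "projected-configmap-volume",
          VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
              Sources: []corev1.VolumeProjection{{
                ConfigMap: &corev1.ConfigMapProjection{
                  LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
                  // Map key "data-1" to a different file path inside the volume.
                  Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                },
              }},
            },
          },
        }},
        Containers: []corev1.Container{{
          Name:  "agnhost-container",
          Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
          // Assumed agnhost subcommand that prints a mounted file's content.
          Args: []string{"mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"},
          VolumeMounts: []corev1.VolumeMount{{
            Name:      "projected-configmap-volume",
            MountPath: "/etc/projected-configmap-volume",
          }},
        }},
      },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
  }

------------------------------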
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":158,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:43.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:22:43.290: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Nov 5 23:22:48.293: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 5 23:22:48.293: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:22:48.306: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4876 b0dda7da-3f49-4412-b620-fdfbf8ba962f 39202 1 2021-11-05 23:22:48 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-11-05 23:22:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00889ca28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Nov 5 23:22:48.309: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-4876 d4c1e8fa-95e9-4abc-a3b4-af0d2a585303 39206 1 2021-11-05 23:22:48 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b0dda7da-3f49-4412-b620-fdfbf8ba962f 0xc00889d1a7 0xc00889d1a8}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:22:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0dda7da-3f49-4412-b620-fdfbf8ba962f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00889d278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:22:48.309: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Nov 5 23:22:48.309: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4876 c1910948-9fa7-4d5d-b171-10ec4134ff70 39204 1 2021-11-05 23:22:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b0dda7da-3f49-4412-b620-fdfbf8ba962f 0xc00889cf67 0xc00889cf68}] [] [{e2e.test Update apps/v1
2021-11-05 23:22:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:22:48 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"b0dda7da-3f49-4412-b620-fdfbf8ba962f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00889d0c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:22:48.312: INFO: Pod "test-cleanup-controller-dhbrk" is available: &Pod{ObjectMeta:{test-cleanup-controller-dhbrk test-cleanup-controller- deployment-4876 f9ea7f2c-c396-4f39-8d50-a818722a1326 39178 0 2021-11-05 23:22:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.233" ], "mac": "9e:b5:f2:9f:31:6d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.233" ], "mac": "9e:b5:f2:9f:31:6d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller c1910948-9fa7-4d5d-b171-10ec4134ff70 0xc00889da07 0xc00889da08}] [] [{kube-controller-manager Update v1 2021-11-05 23:22:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1910948-9fa7-4d5d-b171-10ec4134ff70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:22:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:22:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rwkqg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwkqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks
:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.233,StartTime:2021-11-05 23:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:22:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2b4e05db55fb4ba69aa5ea8a187c00725a86c46d34102eac35c10cde7a27a04c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:48.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4876" for this suite. 
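------------------------------
[Editor's note] The Deployment dump above shows RevisionHistoryLimit:*0, which is the field this spec exercises: with a zero history limit, the deployment controller garbage-collects superseded ReplicaSets (here test-cleanup-controller) once the rollout completes instead of retaining them for rollback. A minimal sketch of such a Deployment; the label name=cleanup-pod, the agnhost image, and RevisionHistoryLimit of 0 are taken from the dump, while the Deployment name and namespace are illustrative.

  package main

  import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func int32Ptr(i int32) *int32 { return &i }

  func main() {
    // Hypothetical sketch, not the e2e suite's implementation.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }

    labels := map[string]string{"name": "cleanup-pod"}
    dep := &appsv1.Deployment{
      ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-demo", Labels: labels},
      Spec: appsv1.DeploymentSpec{
        Replicas:             int32Ptr(1),
        RevisionHistoryLimit: int32Ptr(0), // keep zero old ReplicaSets around
        Selector:             &metav1.LabelSelector{MatchLabels: labels},
        Template: corev1.PodTemplateSpec{
          ObjectMeta: metav1.ObjectMeta{Labels: labels},
          Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
              Name:  "agnhost",
              Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
            }},
          },
        },
      },
    }
    if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
  }

Setting the limit to 0 is what lets the test assert that the pre-existing ReplicaSet it adopted is deleted rather than kept in history.
------------------------------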
• [SLOW TEST:5.053 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":8,"skipped":195,"failed":0} S ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:48.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Nov 5 23:22:48.865: INFO: created pod pod-service-account-defaultsa Nov 5 23:22:48.865: INFO: pod pod-service-account-defaultsa service account token volume mount: true Nov 5 23:22:48.874: INFO: created pod pod-service-account-mountsa Nov 5 23:22:48.874: INFO: pod pod-service-account-mountsa service account token volume mount: true Nov 5 23:22:48.883: INFO: created pod pod-service-account-nomountsa Nov 5 23:22:48.883: INFO: pod pod-service-account-nomountsa service account token volume mount: false Nov 5 23:22:48.892: INFO: created pod pod-service-account-defaultsa-mountspec Nov 5 23:22:48.892: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Nov 5 23:22:48.902: INFO: created pod pod-service-account-mountsa-mountspec Nov 5 23:22:48.902: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Nov 5 23:22:48.910: INFO: created pod pod-service-account-nomountsa-mountspec Nov 5 23:22:48.910: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Nov 5 23:22:48.919: INFO: created pod pod-service-account-defaultsa-nomountspec Nov 5 23:22:48.919: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Nov 5 23:22:48.928: INFO: created pod pod-service-account-mountsa-nomountspec Nov 5 23:22:48.928: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Nov 5 23:22:48.936: INFO: created pod pod-service-account-nomountsa-nomountspec Nov 5 23:22:48.936: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:48.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6604" for this suite. 
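The ServiceAccounts spec above creates a 3x3 matrix of pods (default/mounting/non-mounting service account crossed with unset/true/false in the pod spec) and records whether the token volume was mounted; the log shows the pod-level field winning whenever it is set. A minimal sketch of the opt-out case, with hypothetical names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # hypothetical
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount-demo               # hypothetical
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting takes precedence when present
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]

With the field false, no kube-api-access-* projected token volume is attached, matching the "service account token volume mount: false" lines above.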
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":9,"skipped":196,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:43.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 5 23:22:43.279: INFO: Waiting up to 5m0s for pod "downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6" in namespace "downward-api-4650" to be "Succeeded or Failed" Nov 5 23:22:43.285: INFO: Pod "downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.713345ms Nov 5 23:22:45.291: INFO: Pod "downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011160375s Nov 5 23:22:47.295: INFO: Pod "downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015823999s Nov 5 23:22:49.299: INFO: Pod "downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019162995s STEP: Saw pod success Nov 5 23:22:49.299: INFO: Pod "downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6" satisfied condition "Succeeded or Failed" Nov 5 23:22:49.301: INFO: Trying to get logs from node node2 pod downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6 container dapi-container: STEP: delete the pod Nov 5 23:22:49.320: INFO: Waiting for pod downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6 to disappear Nov 5 23:22:49.322: INFO: Pod downward-api-abbc33c6-3fa3-441f-be13-833734c07eb6 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:49.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4650" for this suite. 
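The Downward API spec above injects the pod's own UID into its environment and asserts the container saw it. A minimal sketch of the mechanism; the pod name and command are illustrative, while dapi-container is the container name from the log:

apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # resolved by the kubelet at container start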
• [SLOW TEST:6.082 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:44.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:22:44.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c" in namespace "projected-3420" to be "Succeeded or Failed" Nov 5 23:22:44.342: INFO: Pod "downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144362ms Nov 5 23:22:46.346: INFO: Pod "downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886531s Nov 5 23:22:48.349: INFO: Pod "downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010863001s Nov 5 23:22:50.352: INFO: Pod "downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014249517s STEP: Saw pod success Nov 5 23:22:50.352: INFO: Pod "downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c" satisfied condition "Succeeded or Failed" Nov 5 23:22:50.355: INFO: Trying to get logs from node node2 pod downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c container client-container: STEP: delete the pod Nov 5 23:22:50.368: INFO: Waiting for pod downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c to disappear Nov 5 23:22:50.369: INFO: Pod downwardapi-volume-4c63b0b6-9303-4b82-bbf2-ee13f5c1272c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:50.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3420" for this suite. 
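The Projected downwardAPI spec above relies on a documented fallback: when a container declares no memory limit, a downward-API resourceFieldRef for limits.memory reports the node's allocatable memory instead. A minimal sketch under that assumption; names are illustrative except client-container, which matches the log:

apiVersion: v1
kind: Pod
metadata:
  name: memlimit-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # no resources.limits.memory set, so the projected file reports node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory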
• [SLOW TEST:6.071 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:31.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:22:31.426: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:33.429: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:35.431: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:37.430: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:39.430: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:41.431: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:43.429: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:45.430: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:47.431: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:49.430: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:51.431: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:53.429: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = false) Nov 5 23:22:55.429: INFO: The status of Pod test-webserver-993745ec-262a-44fa-9920-56c8b971facb is Running (Ready = true) Nov 5 23:22:55.431: INFO: Container started at 2021-11-05 23:22:34 +0000 UTC, pod became ready at 2021-11-05 23:22:51 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:22:55.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "container-probe-7509" for this suite. • [SLOW TEST:24.047 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":145,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:49.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:00.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-616" for this suite. • [SLOW TEST:11.062 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":9,"skipped":145,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:00.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Nov 5 23:23:00.441: INFO: created test-event-1 Nov 5 23:23:00.444: INFO: created test-event-2 Nov 5 23:23:00.448: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Nov 5 23:23:00.450: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Nov 5 23:23:00.466: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:00.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9239" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":10,"skipped":154,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:56.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:21:56.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Nov 5 23:22:03.899: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-05T23:22:03Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-05T23:22:03Z]] name:name1 resourceVersion:36551 uid:b151e8d3-8678-4d6f-9409-f62e50e4d84e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Nov 5 23:22:13.904: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-05T23:22:13Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-05T23:22:13Z]] name:name2 resourceVersion:36760 uid:2c86396b-4979-4ab5-91b7-aac1ac98fd28] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Nov 5 23:22:23.909: INFO: Got : MODIFIED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-05T23:22:03Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-05T23:22:23Z]] name:name1 resourceVersion:36968 uid:b151e8d3-8678-4d6f-9409-f62e50e4d84e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Nov 5 23:22:33.914: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-05T23:22:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-05T23:22:33Z]] name:name2 resourceVersion:37925 uid:2c86396b-4979-4ab5-91b7-aac1ac98fd28] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Nov 5 23:22:43.919: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-05T23:22:03Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-05T23:22:23Z]] name:name1 resourceVersion:39117 uid:b151e8d3-8678-4d6f-9409-f62e50e4d84e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Nov 5 23:22:53.925: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-05T23:22:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-05T23:22:33Z]] name:name2 resourceVersion:39388 uid:2c86396b-4979-4ab5-91b7-aac1ac98fd28] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:04.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5613" for this suite. 
• [SLOW TEST:67.622 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":5,"skipped":80,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:00.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:04.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6208" for this suite. 
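The Kubelet spec above schedules a busybox command that always fails and asserts the container status carries a terminated reason. A minimal sketch of an equivalent pod, with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]       # exits non-zero immediately

kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' then prints the terminated reason (typically Error for a non-zero exit), which is the field this conformance check inspects.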
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":170,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:55.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Nov 5 23:22:55.560: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 5 23:23:00.563: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:04.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1532" for this suite. • [SLOW TEST:9.055 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:48.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2004.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2004.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2004.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2004.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2004.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2004.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:23:05.013: INFO: DNS probes using dns-2004/dns-test-e35e97c0-c1f1-449e-8185-89b4aa381e20 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:05.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2004" for this suite. • [SLOW TEST:16.075 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":200,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:04.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:23:04.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd" in namespace "downward-api-1549" to be "Succeeded or Failed" Nov 5 23:23:04.609: INFO: Pod "downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235306ms Nov 5 23:23:06.613: INFO: Pod "downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005455103s Nov 5 23:23:08.616: INFO: Pod "downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008706677s STEP: Saw pod success Nov 5 23:23:08.616: INFO: Pod "downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd" satisfied condition "Succeeded or Failed" Nov 5 23:23:08.619: INFO: Trying to get logs from node node1 pod downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd container client-container: STEP: delete the pod Nov 5 23:23:08.633: INFO: Waiting for pod downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd to disappear Nov 5 23:23:08.636: INFO: Pod downwardapi-volume-61c037e6-675c-47f0-99aa-3a2d6ad457dd no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:08.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1549" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:50.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 5 23:22:50.482: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:52.485: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:54.485: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 5 23:22:54.500: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:56.505: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:22:58.504: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:00.503: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:02.504: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:04.507: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 5 23:23:04.526: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 5 23:23:04.529: INFO: Pod pod-with-poststart-exec-hook still exists Nov 5 23:23:06.531: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 5 23:23:06.533: INFO: Pod pod-with-poststart-exec-hook still exists Nov 5 23:23:08.529: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 5 23:23:08.533: INFO: Pod pod-with-poststart-exec-hook still exists Nov 5 23:23:10.529: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 5 23:23:10.532: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:10.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4482" for this suite. 
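The lifecycle-hook spec above starts a handler pod (pod-handle-http-request) and then a pod whose postStart exec hook calls it, verifying the hook ran before cleanup. A minimal sketch of the hook wiring; this variant writes to a local file instead of exercising the suite's HTTP handler, and all names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo            # hypothetical
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]  # runs right after the container starts

The kubelet does not mark the container Running until the postStart handler returns, which is why the hook's side effects are observable so early in the pod's life.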
• [SLOW TEST:20.095 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":208,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:04.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 5 23:23:04.495: INFO: Waiting up to 5m0s for pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189" in namespace "emptydir-2191" to be "Succeeded or Failed" Nov 5 23:23:04.500: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319359ms Nov 5 23:23:06.504: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008094776s Nov 5 23:23:08.508: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012294165s Nov 5 23:23:10.511: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015149667s Nov 5 23:23:12.515: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019459308s Nov 5 23:23:14.518: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022937378s STEP: Saw pod success Nov 5 23:23:14.518: INFO: Pod "pod-ece08c78-5ef0-4e0d-b005-6135b82ec189" satisfied condition "Succeeded or Failed" Nov 5 23:23:14.521: INFO: Trying to get logs from node node2 pod pod-ece08c78-5ef0-4e0d-b005-6135b82ec189 container test-container: STEP: delete the pod Nov 5 23:23:14.557: INFO: Waiting for pod pod-ece08c78-5ef0-4e0d-b005-6135b82ec189 to disappear Nov 5 23:23:14.559: INFO: Pod pod-ece08c78-5ef0-4e0d-b005-6135b82ec189 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:14.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2191" for this suite. 
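The EmptyDir spec above runs as a non-root user against a default-medium (node disk) emptyDir and verifies a file with 0777 permissions; this works because the kubelet creates emptyDir directories world-writable. A minimal sketch under those assumptions; the UID, paths, and names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo             # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # arbitrary non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium = node-local disk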
• [SLOW TEST:10.104 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":89,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:04.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-9efc84c1-51db-439e-b92d-0eef9506433b STEP: Creating a pod to test consume secrets Nov 5 23:23:04.655: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991" in namespace "projected-9514" to be "Succeeded or Failed" Nov 5 23:23:04.657: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259567ms Nov 5 23:23:06.661: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006274967s Nov 5 23:23:08.664: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00886597s Nov 5 23:23:10.667: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011593772s Nov 5 23:23:12.671: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015523902s Nov 5 23:23:14.674: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019079024s Nov 5 23:23:16.678: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022813s STEP: Saw pod success Nov 5 23:23:16.678: INFO: Pod "pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991" satisfied condition "Succeeded or Failed" Nov 5 23:23:16.680: INFO: Trying to get logs from node node2 pod pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991 container projected-secret-volume-test: STEP: delete the pod Nov 5 23:23:16.696: INFO: Waiting for pod pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991 to disappear Nov 5 23:23:16.699: INFO: Pod pod-projected-secrets-a6b6fd7b-1f5d-4e1e-936a-773a269f7991 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:16.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9514" for this suite. 
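The Projected secret spec above mounts a secret through a projected volume, remapping a key to a new path and pinning a per-item file mode. A sketch of that shape; the secret contents and object names are illustrative (the container name matches the log):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo     # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400            # the per-item "Item Mode"; the file lists as -r--------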
• [SLOW TEST:12.084 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:05.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-6a9c6bfe-bbf8-4ac2-ac1f-0dfdb772e57e STEP: Creating a pod to test consume configMaps Nov 5 23:23:05.077: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8" in namespace "configmap-9087" to be "Succeeded or Failed" Nov 5 23:23:05.080: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406043ms Nov 5 23:23:07.083: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006756467s Nov 5 23:23:09.087: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010569612s Nov 5 23:23:11.093: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016030344s Nov 5 23:23:13.096: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019562984s Nov 5 23:23:15.099: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022372048s Nov 5 23:23:17.102: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025840577s STEP: Saw pod success Nov 5 23:23:17.102: INFO: Pod "pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8" satisfied condition "Succeeded or Failed" Nov 5 23:23:17.105: INFO: Trying to get logs from node node2 pod pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8 container agnhost-container: STEP: delete the pod Nov 5 23:23:17.118: INFO: Waiting for pod pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8 to disappear Nov 5 23:23:17.120: INFO: Pod pod-configmaps-a9eea756-da64-4fa2-9164-44012845ecf8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:17.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9087" for this suite. 
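The ConfigMap spec above consumes a ConfigMap volume from a container running as a non-root user, which works because ConfigMap volume files default to world-readable mode 0644. A minimal sketch with hypothetical names and data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo            # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo    # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # arbitrary non-root UID
  containers:
  - name: agnhost-container
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: configmap-demo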
• [SLOW TEST:12.086 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:08.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 5 23:23:08.749: INFO: The status of Pod annotationupdatedd03d3fd-abf0-459f-b96e-af65cde9a196 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:10.752: INFO: The status of Pod annotationupdatedd03d3fd-abf0-459f-b96e-af65cde9a196 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:12.754: INFO: The status of Pod annotationupdatedd03d3fd-abf0-459f-b96e-af65cde9a196 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:14.753: INFO: The status of Pod annotationupdatedd03d3fd-abf0-459f-b96e-af65cde9a196 is Running (Ready = true) Nov 5 23:23:15.272: INFO: Successfully updated pod "annotationupdatedd03d3fd-abf0-459f-b96e-af65cde9a196" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:17.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9498" for this suite. 
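The Projected downwardAPI spec above mounts the pod's own annotations as a file, patches the annotations, and waits for the kubelet to refresh the file. A minimal sketch of the wiring, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # hypothetical
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations

kubectl annotate pod annotationupdate-demo build=two --overwrite then changes the mounted file on the kubelet's next sync, which is the propagation the test waits to observe.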
• [SLOW TEST:8.606 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":218,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:41.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-3355 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3355 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3355 Nov 5 23:21:41.686: INFO: Found 0 stateful pods, waiting for 1 Nov 5 23:21:51.691: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Nov 5 23:21:51.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:21:51.930: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:21:51.931: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:21:51.931: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:21:51.934: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 5 23:22:01.938: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:22:01.938: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:22:01.950: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999457s Nov 5 23:22:02.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997325161s Nov 5 23:22:03.957: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994131886s Nov 5 23:22:04.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.99026415s Nov 5 23:22:05.962: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.987460955s Nov 5 23:22:06.965: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.98437865s Nov 5 23:22:07.968: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 3.981794586s Nov 5 23:22:08.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.978581772s Nov 5 23:22:09.974: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.974976505s Nov 5 23:22:10.977: INFO: Verifying statefulset ss doesn't scale past 1 for another 972.240054ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3355 Nov 5 23:22:11.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:22:12.253: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:22:12.253: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:22:12.253: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:22:12.256: INFO: Found 1 stateful pods, waiting for 3 Nov 5 23:22:22.260: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:22:22.260: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:22:22.260: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 5 23:22:32.265: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:22:32.265: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:22:32.265: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Nov 5 23:22:32.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:22:32.506: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:22:32.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:22:32.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:22:32.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:22:32.758: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:22:32.758: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:22:32.758: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:22:32.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:22:33.242: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:22:33.242: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:22:33.242: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:22:33.242: INFO: Waiting for statefulset 
status.replicas updated to 0 Nov 5 23:22:33.244: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 5 23:22:43.250: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:22:43.250: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:22:43.250: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:22:43.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999481s Nov 5 23:22:44.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996425753s Nov 5 23:22:45.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993448078s Nov 5 23:22:46.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990233792s Nov 5 23:22:47.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986894976s Nov 5 23:22:48.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.982485229s Nov 5 23:22:49.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.978182587s Nov 5 23:22:50.288: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970878853s Nov 5 23:22:51.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966189434s Nov 5 23:22:52.298: INFO: Verifying statefulset ss doesn't scale past 3 for another 960.590744ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3355 Nov 5 23:22:53.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:22:53.584: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:22:53.584: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:22:53.584: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:22:53.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:22:53.865: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:22:53.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:22:53.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:22:53.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3355 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:22:54.120: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:22:54.120: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:22:54.120: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:22:54.120: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 5 23:23:24.132: INFO: Deleting all statefulset
in ns statefulset-3355 Nov 5 23:23:24.134: INFO: Scaling statefulset ss to 0 Nov 5 23:23:24.142: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:23:24.144: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:24.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3355" for this suite. • [SLOW TEST:102.506 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":6,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:17.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Nov 5 23:23:17.462: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:23:17.473: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:23:19.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:23:21.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:23:24.492: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:24.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5273" for this suite. STEP: Destroying namespace "webhook-5273-markers" for this suite. 
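The discovery walk in this spec can be reproduced by hand against any cluster, since kubectl exposes the same endpoints through raw API access. A minimal sketch, with paths taken from the STEP lines above (jq is optional and only used to trim the output):

kubectl get --raw /apis | jq '.groups[] | select(.name == "admissionregistration.k8s.io")'
kubectl get --raw /apis/admissionregistration.k8s.io
kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq '.resources[].name'
# the last call should list mutatingwebhookconfigurations and validatingwebhookconfigurations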
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.345 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":12,"skipped":244,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:16.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:23:17.265: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:23:19.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:23:21.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751397, loc:(*time.Location)(0x9e12f00)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:23:24.286: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:36.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9826" for this suite. STEP: Destroying namespace "webhook-9826-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.629 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":11,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:36.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Nov 5 23:23:36.496: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1201 75c24b46-755e-493f-8c98-28a8d6591597 40462 0 2021-11-05 23:23:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:23:36.496: INFO: Got : 
MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1201 75c24b46-755e-493f-8c98-28a8d6591597 40463 0 2021-11-05 23:23:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Nov 5 23:23:36.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1201 75c24b46-755e-493f-8c98-28a8d6591597 40464 0 2021-11-05 23:23:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:23:36.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1201 75c24b46-755e-493f-8c98-28a8d6591597 40465 0 2021-11-05 23:23:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1201" for this suite. 
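The restart trick this spec exercises is plain Kubernetes API behaviour: a watch can be opened at an explicit resourceVersion and replays every event recorded after it. A hand-run sketch, assuming kubectl proxy is an acceptable way to reach the API server (the namespace, label, and resourceVersion 40463 are taken from the events above; resourceVersions age out of etcd, so this only works shortly after the events):

kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/watch-1201/configmaps?watch=true&resourceVersion=40463&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted"
# expect the MODIFIED (mutation: 2) and DELETED events shown above, in order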
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":12,"skipped":276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:24.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Nov 5 23:23:24.572: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.572: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.576: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.576: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.584: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.584: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.598: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:24.598: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 5 23:23:27.430: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment-static:true] Nov 5 23:23:27.430: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment-static:true] Nov 5 23:23:28.154: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Nov 5 23:23:28.160: INFO: observed event type ADDED STEP: waiting for Replicas to scale Nov 5 23:23:28.161: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.161: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed 
Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 0 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.162: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.165: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.165: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.172: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.172: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:28.176: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:28.176: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:28.186: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:28.186: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:32.678: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:32.678: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:32.690: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 STEP: listing Deployments Nov 5 23:23:32.694: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Nov 5 23:23:32.707: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Nov 5 23:23:32.714: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:32.714: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:32.718: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:32.726: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:32.734: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:35.342: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 and labels 
map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:36.267: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:36.278: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:36.286: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Nov 5 23:23:40.681: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Nov 5 23:23:40.703: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 1 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 3 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 2 Nov 5 23:23:40.704: INFO: observed Deployment test-deployment in namespace deployment-1653 with ReadyReplicas 3 STEP: deleting the Deployment Nov 5 23:23:40.710: INFO: observed event type MODIFIED Nov 5 23:23:40.710: INFO: observed event type MODIFIED Nov 5 23:23:40.710: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED Nov 5 23:23:40.711: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:23:40.713: INFO: Log out all the ReplicaSets if there is no deployment created Nov 5 23:23:40.716: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-1653 163d048b-c4ec-4bd8-8aa9-9edd2a183198 40513 4 2021-11-05 23:23:28 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment a0043e61-4249-4194-a263-72e1ad1723b4 0xc004923507 0xc004923508}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:23:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0043e61-4249-4194-a263-72e1ad1723b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004923570 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:23:40.719: INFO: ReplicaSet "test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-1653 76402b55-a5c6-44df-ac4b-c172e2425534 40363 3 2021-11-05 23:23:24 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment a0043e61-4249-4194-a263-72e1ad1723b4 0xc0049235d7 0xc0049235d8}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:23:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0043e61-4249-4194-a263-72e1ad1723b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004923640 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:23:40.721: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-1653 5f23fd47-8660-497d-a9d0-70cbc9d4657d 40504 2 2021-11-05 23:23:32 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment a0043e61-4249-4194-a263-72e1ad1723b4 0xc0049236a7 0xc0049236a8}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0043e61-4249-4194-a263-72e1ad1723b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004923710 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:23:40.724: INFO: pod: "test-deployment-85d87c6f4b-44v5c": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-44v5c test-deployment-85d87c6f4b- deployment-1653 fac710cd-2d1a-487b-b7d7-fd29ca9a5945 40503 0 2021-11-05 23:23:36 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.197" ], "mac": "da:fb:f9:36:49:46", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.197" ], "mac": "da:fb:f9:36:49:46", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 5f23fd47-8660-497d-a9d0-70cbc9d4657d 0xc004923987 0xc004923988}] [] [{kube-controller-manager Update v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f23fd47-8660-497d-a9d0-70cbc9d4657d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus 
Update v1 2021-11-05 23:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:23:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t7mjw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7mjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Ope
rator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.197,StartTime:2021-11-05 23:23:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:23:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5b96f1e2a2499a2eaede2445d5070dcdc68efffd3bbde9d827d07e91de229266,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:23:40.725: INFO: pod: "test-deployment-85d87c6f4b-4h29m": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-4h29m test-deployment-85d87c6f4b- deployment-1653 63ed3e71-0987-43d7-8d60-f432cf7543de 40421 0 2021-11-05 23:23:32 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.196" ], "mac": "6a:37:0a:d0:58:df", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.196" ], "mac": "6a:37:0a:d0:58:df", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 5f23fd47-8660-497d-a9d0-70cbc9d4657d 0xc004923b9f 0xc004923bb0}] [] [{kube-controller-manager Update v1 2021-11-05 23:23:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f23fd47-8660-497d-a9d0-70cbc9d4657d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:23:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:23:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p5grb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5grb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:23:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.196,StartTime:2021-11-05 23:23:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:23:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://3555a7a60707142c389417c06f6fc8bd22ee0973f06aba45ae36149111280027,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:40.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1653" for this suite. 
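Every phase of this lifecycle maps onto an ordinary kubectl verb. A rough hand-run equivalent of the spec, under the assumption of a hypothetical namespace deployment-demo (note that kubectl create deployment names the container after the image, agnhost, rather than the test's test-deployment):

kubectl create namespace deployment-demo
kubectl -n deployment-demo create deployment test-deployment --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl -n deployment-demo patch deployment test-deployment -p '{"metadata":{"labels":{"test-deployment":"patched"}}}'   # patching
kubectl -n deployment-demo set image deployment/test-deployment agnhost=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1       # updating; rolls a new ReplicaSet, as above
kubectl -n deployment-demo get deployment test-deployment -o jsonpath='{.status.readyReplicas}'                         # fetching the status
kubectl -n deployment-demo delete deployment test-deployment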
• [SLOW TEST:16.198 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":13,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:36.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Nov 5 23:23:38.617: INFO: running pods: 0 < 1 Nov 5 23:23:40.620: INFO: running pods: 0 < 1 Nov 5 23:23:42.631: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:44.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6334" for this suite. 
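The PodDisruptionBudget object under test here is small; a minimal sketch of one plus a status read (policy/v1 is GA as of v1.21; the namespace, name, and selector below are invented for illustration):

kubectl -n disruption-demo apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: demo
EOF
kubectl -n disruption-demo get pdb demo-pdb -o jsonpath='{.status.disruptionsAllowed}'   # nonzero once the controller has processed the pdb and enough matching pods are healthy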
• [SLOW TEST:8.082 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":13,"skipped":311,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:40.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8616.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8616.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8616.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8616.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8616.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.61.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.61.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.61.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.61.43_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8616.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8616.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8616.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8616.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8616.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8616.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.61.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.61.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.61.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.61.43_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:23:46.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.878: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.880: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.904: INFO: Unable to read jessie_udp@dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.908: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.911: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local from pod dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66: the server could not find the requested resource (get pods dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66) Nov 5 23:23:46.925: INFO: Lookups using dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66 failed for: [wheezy_udp@dns-test-service.dns-8616.svc.cluster.local wheezy_tcp@dns-test-service.dns-8616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local jessie_udp@dns-test-service.dns-8616.svc.cluster.local jessie_tcp@dns-test-service.dns-8616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8616.svc.cluster.local] Nov 5 23:23:51.982: INFO: DNS probes using dns-8616/dns-test-cc994bbc-6fb4-4315-badc-f110f8b29c66 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:52.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8616" for this suite. 
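The probe loops above boil down to four kinds of dig lookups, which can be replayed from any pod that has dig on its PATH. A sketch, where the probe pod name is invented and the jessie-dnsutils image tag is an assumption (the record names and the 10.233.61.43 ClusterIP come from the log):

kubectl -n dns-8616 run dns-probe --restart=Never --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4 -- sleep 3600
kubectl -n dns-8616 exec dns-probe -- dig +search +short dns-test-service.dns-8616.svc.cluster.local A
kubectl -n dns-8616 exec dns-probe -- dig +search +short _http._tcp.dns-test-service.dns-8616.svc.cluster.local SRV
kubectl -n dns-8616 exec dns-probe -- dig +short -x 10.233.61.43   # PTR lookup, the reverse of the checks above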
• [SLOW TEST:11.201 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":14,"skipped":288,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:21:31.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service nodeport-test with type=NodePort in namespace services-3993
STEP: creating replication controller nodeport-test in namespace services-3993
I1105 23:21:31.976110 34 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3993, replica count: 2
I1105 23:21:35.028011 34 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1105 23:21:38.028641 34 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1105 23:21:41.029913 34 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1105 23:21:44.031307 34 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 5 23:21:44.031: INFO: Creating new exec pod
Nov 5 23:21:51.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Nov 5 23:21:51.309: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Nov 5 23:21:51.309: INFO: stdout: "nodeport-test-8zwjp"
Nov 5 23:21:51.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.52.49 80'
Nov 5 23:21:51.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.52.49 80\nConnection to 10.233.52.49 80 port [tcp/http] succeeded!\n"
Nov 5 23:21:51.549: INFO: stdout: "nodeport-test-8zwjp"
Nov 5 23:21:51.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400'
Nov 5 23:21:51.792: INFO: rc: 1
Nov 5 23:21:51.792: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2
10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:52.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:53.025: INFO: rc: 1 Nov 5 23:21:53.025: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:53.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:54.021: INFO: rc: 1 Nov 5 23:21:54.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:54.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:55.040: INFO: rc: 1 Nov 5 23:21:55.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:55.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:56.036: INFO: rc: 1 Nov 5 23:21:56.036: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
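The block repeated above and below is the test's service-reachability probe: from the helper pod execpodjbf9w it already reached the service by name (nodeport-test 80) and by ClusterIP (10.233.52.49 80), and is now retrying the node address 10.10.190.207 on nodePort 30400 roughly once per second; nc's -t forces TCP and -w 2 gives each attempt a two-second connect timeout. A standalone sketch of that retry loop, assuming kubectl access to the same cluster (the 60-attempt bound is an illustrative choice, not taken from the test; the inner command is from the log):

  ns=services-3993
  pod=execpodjbf9w
  node_ip=10.10.190.207
  node_port=30400
  for attempt in $(seq 1 60); do
    # exec into the helper pod and try a TCP connect to <nodeIP>:<nodePort>
    if kubectl --kubeconfig=/root/.kube/config --namespace="$ns" exec "$pod" -- \
         /bin/sh -c "echo hostName | nc -v -t -w 2 $node_ip $node_port"; then
      echo "reachable after $attempt attempt(s)"
      break
    fi
    sleep 1
  done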
Nov 5 23:21:56.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:57.022: INFO: rc: 1 Nov 5 23:21:57.022: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:57.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:58.027: INFO: rc: 1 Nov 5 23:21:58.027: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:58.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:21:59.045: INFO: rc: 1 Nov 5 23:21:59.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:21:59.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:00.277: INFO: rc: 1 Nov 5 23:22:00.277: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30400 + echo hostName nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:00.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:01.087: INFO: rc: 1 Nov 5 23:22:01.087: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:01.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:02.031: INFO: rc: 1 Nov 5 23:22:02.031: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:02.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:03.043: INFO: rc: 1 Nov 5 23:22:03.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:03.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:04.325: INFO: rc: 1 Nov 5 23:22:04.325: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:04.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:05.056: INFO: rc: 1 Nov 5 23:22:05.056: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:05.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:06.052: INFO: rc: 1 Nov 5 23:22:06.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:06.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:07.041: INFO: rc: 1 Nov 5 23:22:07.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:07.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:09.113: INFO: rc: 1 Nov 5 23:22:09.113: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:09.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:10.378: INFO: rc: 1 Nov 5 23:22:10.379: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:10.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:11.043: INFO: rc: 1 Nov 5 23:22:11.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:11.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:12.183: INFO: rc: 1 Nov 5 23:22:12.183: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:12.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:13.121: INFO: rc: 1 Nov 5 23:22:13.121: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:13.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:14.208: INFO: rc: 1 Nov 5 23:22:14.208: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:14.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:15.407: INFO: rc: 1 Nov 5 23:22:15.407: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30400 + echo hostName nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:15.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:16.093: INFO: rc: 1 Nov 5 23:22:16.093: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:16.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:17.346: INFO: rc: 1 Nov 5 23:22:17.346: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:17.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:18.046: INFO: rc: 1 Nov 5 23:22:18.046: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:18.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:19.053: INFO: rc: 1 Nov 5 23:22:19.053: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:19.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:20.045: INFO: rc: 1 Nov 5 23:22:20.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:20.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:21.039: INFO: rc: 1 Nov 5 23:22:21.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:21.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:22.032: INFO: rc: 1 Nov 5 23:22:22.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:22.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:23.070: INFO: rc: 1 Nov 5 23:22:23.070: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:23.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:24.031: INFO: rc: 1 Nov 5 23:22:24.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:24.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:25.054: INFO: rc: 1 Nov 5 23:22:25.055: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:25.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:26.170: INFO: rc: 1 Nov 5 23:22:26.170: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:26.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:27.037: INFO: rc: 1 Nov 5 23:22:27.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30400 + echo hostName nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:27.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:28.052: INFO: rc: 1 Nov 5 23:22:28.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + nc -v -t -w+ 2 10.10.190.207 30400echo hostName nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:28.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:29.053: INFO: rc: 1 Nov 5 23:22:29.053: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:29.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:30.029: INFO: rc: 1 Nov 5 23:22:30.029: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30400 + echo hostName nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:30.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:31.043: INFO: rc: 1 Nov 5 23:22:31.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:31.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:32.044: INFO: rc: 1 Nov 5 23:22:32.044: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:32.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:33.032: INFO: rc: 1 Nov 5 23:22:33.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:33.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:34.066: INFO: rc: 1 Nov 5 23:22:34.066: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:34.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:35.040: INFO: rc: 1 Nov 5 23:22:35.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:35.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:36.039: INFO: rc: 1 Nov 5 23:22:36.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:36.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:37.065: INFO: rc: 1 Nov 5 23:22:37.065: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:37.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:38.444: INFO: rc: 1 Nov 5 23:22:38.444: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:38.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:39.033: INFO: rc: 1 Nov 5 23:22:39.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:39.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:40.049: INFO: rc: 1 Nov 5 23:22:40.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:40.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:41.048: INFO: rc: 1 Nov 5 23:22:41.048: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:41.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:42.051: INFO: rc: 1 Nov 5 23:22:42.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:42.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:43.060: INFO: rc: 1 Nov 5 23:22:43.061: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:43.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:44.035: INFO: rc: 1 Nov 5 23:22:44.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:44.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:45.044: INFO: rc: 1 Nov 5 23:22:45.044: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:45.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:46.041: INFO: rc: 1 Nov 5 23:22:46.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:46.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:47.047: INFO: rc: 1 Nov 5 23:22:47.047: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:47.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:48.124: INFO: rc: 1 Nov 5 23:22:48.124: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:48.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:49.106: INFO: rc: 1 Nov 5 23:22:49.106: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:49.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:50.110: INFO: rc: 1 Nov 5 23:22:50.110: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:50.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:51.795: INFO: rc: 1 Nov 5 23:22:51.795: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:52.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:53.074: INFO: rc: 1 Nov 5 23:22:53.075: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:53.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:54.035: INFO: rc: 1 Nov 5 23:22:54.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:54.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:55.045: INFO: rc: 1 Nov 5 23:22:55.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:55.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:56.570: INFO: rc: 1 Nov 5 23:22:56.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:56.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:57.044: INFO: rc: 1 Nov 5 23:22:57.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:57.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:58.011: INFO: rc: 1 Nov 5 23:22:58.011: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:22:58.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:22:59.066: INFO: rc: 1 Nov 5 23:22:59.066: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:22:59.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:23:00.063: INFO: rc: 1 Nov 5 23:23:00.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:23:00.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:23:01.281: INFO: rc: 1 Nov 5 23:23:01.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:23:01.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:23:02.062: INFO: rc: 1 Nov 5 23:23:02.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:23:02.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400' Nov 5 23:23:03.058: INFO: rc: 1 Nov 5 23:23:03.058: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30400 nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:23:03.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400'
Nov 5 23:23:04.169: INFO: rc: 1
Nov 5 23:23:04.169: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30400
nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
[the identical probe was retried roughly once per second from 23:23:04 through 23:23:52; every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused"; the repeated entries are collapsed here]
Nov 5 23:23:52.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400'
Nov 5 23:23:52.592: INFO: rc: 1
Nov 5 23:23:52.592: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 exec execpodjbf9w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30400:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30400
nc: connect to 10.10.190.207 port 30400 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
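The probe being retried above is just a shell one-liner executed inside the client pod. A minimal standalone sketch of the same loop, reusing the kubeconfig, namespace, pod name, node IP, and NodePort taken from this log (the framework's actual loop lives in test/e2e/network/service.go and gives up after the 2m0s timeout seen below):

    # Retry the NodePort probe about once per second, up to 120s,
    # mirroring what the e2e framework is doing in the entries above.
    for i in $(seq 1 120); do
      kubectl --kubeconfig=/root/.kube/config --namespace=services-3993 \
        exec execpodjbf9w -- /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 30400' \
        && break    # success: something answered on the NodePort
      sleep 1
    done

This is only a reproduction sketch, not the framework code itself; it succeeds as soon as one attempt connects.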
Nov 5 23:23:52.593: FAIL: Unexpected error:
    <*errors.errorString | 0xc00404c730>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30400 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30400 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001803b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001803b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001803b00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3993".
STEP: Found 17 events.
Nov 5 23:23:52.609: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodjbf9w: { } Scheduled: Successfully assigned services-3993/execpodjbf9w to node1
Nov 5 23:23:52.609: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-8zwjp: { } Scheduled: Successfully assigned services-3993/nodeport-test-8zwjp to node2
Nov 5 23:23:52.609: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-928bk: { } Scheduled: Successfully assigned services-3993/nodeport-test-928bk to node2
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:31 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-928bk
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:31 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-8zwjp
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:37 +0000 UTC - event for nodeport-test-928bk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:38 +0000 UTC - event for nodeport-test-8zwjp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:38 +0000 UTC - event for nodeport-test-928bk: {kubelet node2} Created: Created container nodeport-test
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:38 +0000 UTC - event for nodeport-test-928bk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 939.795033ms
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:39 +0000 UTC - event for nodeport-test-8zwjp: {kubelet node2} Started: Started container nodeport-test
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:39 +0000 UTC - event for nodeport-test-8zwjp: {kubelet node2} Created: Created container nodeport-test
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:39 +0000 UTC - event for nodeport-test-8zwjp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 616.484068ms
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:39 +0000 UTC - event for nodeport-test-928bk: {kubelet node2} Started: Started container nodeport-test
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:47 +0000 UTC - event for execpodjbf9w: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
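A "Connection refused" on a NodePort usually means nothing is listening on that port on the node, which points at Service/kube-proxy programming rather than at the backend pods (the events above and below show both nodeport-test pods pulled, created, and started normally). Some hedged first checks, assuming the Service shares the nodeport-test name used by its pods in this log and that kube-proxy carries the usual kubeadm label:

    # Did the Service actually allocate NodePort 30400, and does it have ready endpoints?
    kubectl --namespace=services-3993 get svc nodeport-test -o wide
    kubectl --namespace=services-3993 get endpoints nodeport-test

    # Is kube-proxy healthy on the node behind 10.10.190.207?
    kubectl --namespace=kube-system get pods -l k8s-app=kube-proxy -o wide
    kubectl --namespace=kube-system logs <kube-proxy-pod-on-that-node>

The kube-proxy pod name is left as a placeholder; pick it from the -o wide output above.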
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:48 +0000 UTC - event for execpodjbf9w: {kubelet node1} Started: Started container agnhost-container
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:48 +0000 UTC - event for execpodjbf9w: {kubelet node1} Created: Created container agnhost-container
Nov 5 23:23:52.609: INFO: At 2021-11-05 23:21:48 +0000 UTC - event for execpodjbf9w: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 446.489268ms
Nov 5 23:23:52.612: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 5 23:23:52.612: INFO: execpodjbf9w node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:44 +0000 UTC }]
Nov 5 23:23:52.612: INFO: nodeport-test-8zwjp node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:31 +0000 UTC }]
Nov 5 23:23:52.612: INFO: nodeport-test-928bk node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:21:31 +0000 UTC }]
Nov 5 23:23:52.612: INFO:
Nov 5 23:23:52.617: INFO: Logging node info for node master1
Nov 5 23:23:52.619: INFO: Node Info: master1: Ready=True (KubeletReady), InternalIP 10.10.190.202, podCIDR 10.244.0.0/24, roles control-plane/master (NoSchedule taint), capacity cpu 80 / pods 110, kubelet v1.21.1, kube-proxy v1.21.1, docker://20.10.10, CentOS Linux 7 (Core), kernel 3.10.0-1160.45.1.el7.x86_64 [full &Node{...} object dump elided]
Nov 5 23:23:52.620: INFO: Logging kubelet events for node master1
Nov 5 23:23:52.623: INFO: Logging pods the kubelet thinks are on node master1
Nov 5 23:23:52.656: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded): kube-proxy ready: true, restart count 1
Nov 5 23:23:52.656: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded): kube-multus ready: true, restart count 1
Nov 5 23:23:52.657: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded): docker-registry ready: true, restart count 0; nginx ready: true, restart count 0
Nov 5 23:23:52.657: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded): kube-apiserver ready: true, restart count 0
Nov 5 23:23:52.657: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded): kube-controller-manager ready: true, restart count 3
Nov 5 23:23:52.657: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded): kube-scheduler ready: true, restart count 0
Nov 5 23:23:52.657: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded): init install-cni ready: true, restart count 2; kube-flannel ready: true, restart count 2
Nov 5 23:23:52.657: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded): coredns ready: true, restart count 2
Nov 5 23:23:52.657: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded): kube-rbac-proxy ready: true, restart count 0; node-exporter ready: true, restart count 0
W1105 23:23:52.668627 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
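The per-node dumps the framework emits on failure (the master1 dump above and the master2/master3 dumps below) can be reproduced manually against a live cluster; a hedged equivalent, assuming the node names from this log:

    # Human-readable counterparts of the Node Info / pod / event dumps
    kubectl describe node master1
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=master1
    kubectl get events --all-namespaces \
      --field-selector involvedObject.kind=Node,involvedObject.name=master1

kubectl describe also folds in recent events and resource allocation, which is often quicker than reading the raw &Node{} object.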
Nov 5 23:23:52.734: INFO: Latency metrics for node master1
Nov 5 23:23:52.734: INFO: Logging node info for node master2
Nov 5 23:23:52.743: INFO: Node Info: master2: Ready=True (KubeletReady), InternalIP 10.10.190.203, podCIDR 10.244.1.0/24, roles control-plane/master (NoSchedule taint), capacity cpu 80 / pods 110, nfd.node.kubernetes.io/master.version v0.8.2, kubelet v1.21.1, kube-proxy v1.21.1, docker://20.10.10, CentOS Linux 7 (Core), kernel 3.10.0-1160.45.1.el7.x86_64 [full &Node{...} object dump elided]
Nov 5 23:23:52.744: INFO: Logging kubelet events for node master2
Nov 5 23:23:52.747: INFO: Logging pods the kubelet thinks are on node master2
Nov 5 23:23:52.766: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded): kube-controller-manager ready: true, restart count 2
Nov 5 23:23:52.766: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded): kube-proxy ready: true, restart count 1
Nov 5 23:23:52.766: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded): init install-cni ready: true, restart count 0; kube-flannel ready: true, restart count 3
Nov 5 23:23:52.766: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded): kube-apiserver ready: true, restart count 0
Nov 5 23:23:52.766: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded): kube-scheduler ready: true, restart count 3
Nov 5 23:23:52.766: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded): kube-multus ready: true, restart count 1
Nov 5 23:23:52.766: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded): nfd-controller ready: true, restart count 0
Nov 5 23:23:52.766: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded): kube-rbac-proxy ready: true, restart count 0; node-exporter ready: true, restart count 0
W1105 23:23:52.780671 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
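In these Node dumps the field the suite actually gates on is the Ready condition (the BeforeEach at the top of the run waited for all nodes to be schedulable). A quick hedged way to scan it across every node, using standard kubectl jsonpath:

    # Print each node's name and Ready condition status (True/False/Unknown)
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

All three masters report Ready=True here, so the NodePort failure is not a node-health problem.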
Nov 5 23:23:52.847: INFO: Latency metrics for node master2
Nov 5 23:23:52.847: INFO: Logging node info for node master3
Nov 5 23:23:52.849: INFO: Node Info: master3: Ready=True (KubeletReady), InternalIP 10.10.190.204, podCIDR 10.244.2.0/24, roles control-plane/master (NoSchedule taint), capacity cpu 80 / pods 110, kubelet v1.21.1, kube-proxy v1.21.1, docker://20.10.10, CentOS Linux 7 (Core), kernel 3.10.0-1160.45.1.el7.x86_64 [full &Node{...} object dump elided]
Nov 5 23:23:52.850: INFO: Logging kubelet events for node master3
Nov 5 23:23:52.852: INFO: Logging pods the kubelet thinks are on node master3
Nov 5 23:23:52.865: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded): kube-scheduler ready: true, restart count 3
Nov 5 23:23:52.865: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded): kube-proxy ready: true, restart count 2
Nov 5 23:23:52.865: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded): kube-multus ready: true, restart count 1
Nov 5 23:23:52.865: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded): autoscaler ready: true, restart count 1
Nov 5 23:23:52.865: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded): kube-rbac-proxy ready: true, restart count 0; node-exporter ready: true, restart count 0
Nov 5 23:23:52.865: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded): kube-apiserver ready: true, restart count 0
Nov 5 23:23:52.865: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded): kube-controller-manager ready: true, restart count 2
Nov 5 23:23:52.865: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded): init install-cni ready: true, restart count 0; kube-flannel ready: true, restart count 1
Nov 5 23:23:52.865: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded): coredns ready: true, restart count 1
W1105 23:23:52.877106 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:23:52.953: INFO: Latency metrics for node master3 Nov 5 23:23:52.953: INFO: Logging node info for node node1 Nov 5 23:23:52.955: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 40790 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:23:52 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:23:52 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:23:52 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:23:52 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:23:52.956: INFO: Logging kubelet events for node node1 Nov 5 23:23:52.958: INFO: Logging pods the kubelet thinks is on node node1 Nov 5 23:23:52.973: INFO: externalname-service-m64cs started at 2021-11-05 23:23:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container externalname-service ready: true, restart count 0 Nov 5 23:23:52.973: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:23:52.973: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:23:52.973: INFO: 
ss2-1 started at 2021-11-05 23:23:27 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container webserver ready: false, restart count 0 Nov 5 23:23:52.973: INFO: pod-0 started at 2021-11-05 23:23:38 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container donothing ready: false, restart count 0 Nov 5 23:23:52.973: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:23:52.973: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 5 23:23:52.973: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:23:52.973: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:23:52.973: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 5 23:23:52.973: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:23:52.973: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:23:52.973: INFO: Container grafana ready: true, restart count 0 Nov 5 23:23:52.973: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:23:52.973: INFO: test-pod started at 2021-11-05 23:23:17 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container webserver ready: true, restart count 0 Nov 5 23:23:52.973: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:23:52.973: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:23:52.973: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded) Nov 5 23:23:52.973: INFO: Container discover ready: false, restart count 0 Nov 5 23:23:52.973: INFO: Container init ready: false, restart count 0 Nov 5 23:23:52.973: INFO: Container install ready: false, restart count 0 Nov 5 23:23:52.973: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:23:52.973: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:23:52.973: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:23:52.973: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:23:52.973: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.973: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:23:52.974: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:23:52.974: INFO: Container collectd ready: true, restart count 0 Nov 5 23:23:52.974: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:23:52.974: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:23:52.974: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.974: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:23:52.974: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.974: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:23:52.974: INFO: 
kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.974: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:23:52.974: INFO: execpod9nkst started at 2021-11-05 23:23:50 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.974: INFO: Container agnhost-container ready: false, restart count 0 Nov 5 23:23:52.974: INFO: execpodjbf9w started at 2021-11-05 23:21:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.974: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:23:52.974: INFO: externalname-service-7kx6r started at 2021-11-05 23:23:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:52.974: INFO: Container externalname-service ready: true, restart count 0 W1105 23:23:52.988928 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:23:53.384: INFO: Latency metrics for node node1 Nov 5 23:23:53.384: INFO: Logging node info for node node2 Nov 5 23:23:53.387: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 40757 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:23:50 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:23:50 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:23:50 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:23:50 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:23:53.389: INFO: Logging kubelet events for node node2 Nov 5 23:23:53.391: INFO: Logging pods the kubelet thinks is on node node2 Nov 5 23:23:53.406: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:23:53.406: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Init container install-cni ready: true, restart count 1 Nov 5 23:23:53.406: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:23:53.406: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:23:53.406: INFO: forbid-27269242-n79sc started at 2021-11-05 23:22:00 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container c ready: true, restart count 0 Nov 5 23:23:53.406: INFO: ss2-0 started at 2021-11-05 23:23:24 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container webserver ready: true, restart count 0 Nov 5 23:23:53.406: INFO: ss2-2 started at 2021-11-05 23:23:29 +0000 UTC (0+1 container statuses recorded) Nov 5 
23:23:53.406: INFO: Container webserver ready: true, restart count 0 Nov 5 23:23:53.406: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:23:53.406: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:23:53.406: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:23:53.406: INFO: Container collectd ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:23:53.406: INFO: nodeport-test-8zwjp started at 2021-11-05 23:21:31 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container nodeport-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 5 23:23:53.406: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:23:53.406: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 5 23:23:53.406: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:23:53.406: INFO: pod-secrets-6f6644c2-e5e3-4181-be75-44774c531388 started at 2021-11-05 23:23:10 +0000 UTC (0+3 container statuses recorded) Nov 5 23:23:53.406: INFO: Container creates-volume-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container dels-volume-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container upds-volume-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: nodeport-test-928bk started at 2021-11-05 23:21:31 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container nodeport-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: pod-service-account-defaultsa-nomountspec started at 2021-11-05 23:22:48 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container token-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: pod-service-account-nomountsa-nomountspec started at 2021-11-05 23:22:48 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container token-test ready: true, restart count 0 Nov 5 23:23:53.406: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:23:53.406: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:23:53.406: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 5 23:23:53.406: INFO: Container discover ready: false, restart count 0 Nov 5 23:23:53.406: INFO: Container init ready: false, restart count 0 Nov 5 23:23:53.406: INFO: Container install ready: false, restart count 0 Nov 5 23:23:53.406: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 
23:23:53.406: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:23:53.406: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:23:53.406: INFO: concurrent-27269243-7qztg started at 2021-11-05 23:23:00 +0000 UTC (0+1 container statuses recorded) Nov 5 23:23:53.406: INFO: Container c ready: true, restart count 0 W1105 23:23:53.417811 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:23:53.695: INFO: Latency metrics for node node2 Nov 5 23:23:53.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3993" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [141.765 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:23:52.593: Unexpected error: <*errors.errorString | 0xc00404c730>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30400 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30400 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":2,"skipped":33,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:53.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:23:55.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8745" for this suite. 
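------------------------------
The only failure in this block is the NodePort conformance spec above: the framework gave up after polling 10.10.190.207:30400 — node1's InternalIP (see node1's Addresses in the node dump above) plus the allocated NodePort — for 2m0s without completing a TCP connection. A check equivalent in spirit can be sketched as a standalone probe; this is an illustrative program, not the framework's actual helper in test/e2e/network/service.go, and the endpoint literal is simply copied from the failure message.

// Minimal sketch of the reachability check the failure above reports:
// poll a node's InternalIP on the allocated NodePort until a TCP connect
// succeeds or an overall deadline (2m, matching the log) expires.
package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    // endpoint copied from the failure message above
    endpoint := "10.10.190.207:30400"
    deadline := time.Now().Add(2 * time.Minute)

    for time.Now().Before(deadline) {
        conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second)
        if err == nil {
            conn.Close()
            fmt.Printf("service reachable on %s\n", endpoint)
            return
        }
        time.Sleep(2 * time.Second) // brief pause between attempts
    }
    fmt.Fprintf(os.Stderr, "service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
    os.Exit(1)
}

Since NodePort traffic must be answerable on every node regardless of where the backing pods run, and both nodeport-test pods on node2 report ready in the listing above, a timeout like this typically points at the per-node data path (kube-proxy's NodePort programming or the flannel host-gw fabric on node1) rather than at the service's pods — one plausible reading, not a confirmed diagnosis.
------------------------------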
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":3,"skipped":37,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":214,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:22:32.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1105 23:22:32.238004 37 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:00.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-51" for this suite. 
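------------------------------
For the CronJob concurrency spec that concludes just below, the STEP lines record the whole flow: create a cronjob, wait until more than one Job is running at a time, confirm at least two running Jobs by listing them explicitly, then remove the cronjob. The warning at 23:22:32 also notes that batch/v1beta1 CronJob is deprecated in favor of batch/v1. A minimal client-go sketch of the kind of object such a test creates follows; the schedule, image, sleep duration, and namespace are illustrative assumptions rather than the test's literals, though the container name "c" matches the Job pods (forbid-27269242-n79sc, concurrent-27269243-7qztg) visible in node2's pod listing above, and the kubeconfig path is the one the suite itself uses.

// Illustrative sketch (assumed values marked below): a batch/v1 CronJob whose
// ConcurrencyPolicy allows overlapping Jobs, created via client-go.
package main

import (
    "context"
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // same kubeconfig path the e2e suite logs with ">>> kubeConfig:"
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    cj := &batchv1.CronJob{
        ObjectMeta: metav1.ObjectMeta{Name: "concurrent"},
        Spec: batchv1.CronJobSpec{
            Schedule:          "*/1 * * * *",           // assumed: fire every minute
            ConcurrencyPolicy: batchv1.AllowConcurrent, // let Jobs overlap
            JobTemplate: batchv1.JobTemplateSpec{
                Spec: batchv1.JobSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: corev1.RestartPolicyOnFailure,
                            Containers: []corev1.Container{{
                                Name:    "c",                       // matches the Job pods in the listing above
                                Image:   "busybox:1.28",            // assumed image
                                Command: []string{"sleep", "300"},  // outlive the next tick so runs overlap
                            }},
                        },
                    },
                },
            },
        },
    }

    created, err := client.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created CronJob", created.Name)
}

With AllowConcurrent (also the API default), any run that outlives the next schedule tick yields overlapping active Jobs — exactly the condition the spec polls for — whereas ForbidConcurrent, presumably what produced the forbid-* Job above, skips a new run while one is still active.
------------------------------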
• [SLOW TEST:88.042 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":13,"skipped":214,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:55.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:23:56.060: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:23:58.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751436, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751436, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751436, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751436, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:24:01.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:01.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-992" for this suite. STEP: Destroying namespace "webhook-992-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.368 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:52.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:23:52.056: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 5 23:24:00.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 --namespace=crd-publish-openapi-5501 create -f -' Nov 5 23:24:01.127: INFO: stderr: "" Nov 5 23:24:01.127: INFO: stdout: "e2e-test-crd-publish-openapi-6711-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 5 23:24:01.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 --namespace=crd-publish-openapi-5501 delete e2e-test-crd-publish-openapi-6711-crds test-cr' Nov 5 23:24:01.289: INFO: stderr: "" Nov 5 23:24:01.289: INFO: stdout: "e2e-test-crd-publish-openapi-6711-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Nov 5 23:24:01.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 --namespace=crd-publish-openapi-5501 apply -f -' Nov 5 23:24:01.661: INFO: stderr: "" Nov 5 23:24:01.661: INFO: stdout: "e2e-test-crd-publish-openapi-6711-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 5 23:24:01.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 --namespace=crd-publish-openapi-5501 delete e2e-test-crd-publish-openapi-6711-crds test-cr' Nov 5 23:24:01.834: INFO: stderr: "" Nov 5 23:24:01.834: INFO: stdout: "e2e-test-crd-publish-openapi-6711-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 5 23:24:01.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 explain e2e-test-crd-publish-openapi-6711-crds' Nov 5 23:24:02.178: INFO: stderr: "" Nov 5 23:24:02.178: 
INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6711-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:05.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5501" for this suite. • [SLOW TEST:13.696 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":15,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:05.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 5 23:24:06.405: INFO: starting watch STEP: patching STEP: updating Nov 5 23:24:06.411: INFO: waiting for watch events with expected annotations Nov 5 23:24:06.412: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:06.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"certificates-74" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":16,"skipped":328,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:00.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:24:00.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a" in namespace "projected-5711" to be "Succeeded or Failed" Nov 5 23:24:00.306: INFO: Pod "downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694516ms Nov 5 23:24:02.309: INFO: Pod "downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005654267s Nov 5 23:24:04.313: INFO: Pod "downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009333648s Nov 5 23:24:06.319: INFO: Pod "downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015134316s STEP: Saw pod success Nov 5 23:24:06.319: INFO: Pod "downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a" satisfied condition "Succeeded or Failed" Nov 5 23:24:06.321: INFO: Trying to get logs from node node2 pod downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a container client-container: STEP: delete the pod Nov 5 23:24:06.483: INFO: Waiting for pod downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a to disappear Nov 5 23:24:06.485: INFO: Pod downwardapi-volume-ed82363a-73c4-498e-ba68-dd4a499d4b3a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:06.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5711" for this suite. 
• [SLOW TEST:6.231 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":4,"skipped":41,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:01.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:24:01.198: INFO: The status of Pod busybox-scheduling-279c0b37-f9de-4b01-b455-0a869be6a7ea is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:03.201: INFO: The status of Pod busybox-scheduling-279c0b37-f9de-4b01-b455-0a869be6a7ea is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:05.201: INFO: The status of Pod busybox-scheduling-279c0b37-f9de-4b01-b455-0a869be6a7ea is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:07.204: INFO: The status of Pod busybox-scheduling-279c0b37-f9de-4b01-b455-0a869be6a7ea is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4993" for this suite. 
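------------------------------
The Kubelet test above only checks that a container's stdout ends up in its logs. A small sketch of the same check with client-go — the pod name and message are made up; GetLogs is the same API call `kubectl logs` uses:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "echo hello-from-stdout"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the one-shot container has run to completion,
	// much like the "waiting for it to be Running" loop in the log.
	for {
		p, err := cs.CoreV1().Pods("default").Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if p.Status.Phase == corev1.PodSucceeded {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Fetch the container log through the API server, which is what the test asserts on.
	raw, err := cs.CoreV1().Pods("default").GetLogs(pod.Name, &corev1.PodLogOptions{}).DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("logs: %s", raw)
}
------------------------------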
• [SLOW TEST:6.082 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:06.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 5 23:24:06.577: INFO: Waiting up to 5m0s for pod "pod-0510d783-3f3d-4e43-b7f8-1a697867feb6" in namespace "emptydir-6465" to be "Succeeded or Failed" Nov 5 23:24:06.580: INFO: Pod "pod-0510d783-3f3d-4e43-b7f8-1a697867feb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381253ms Nov 5 23:24:08.583: INFO: Pod "pod-0510d783-3f3d-4e43-b7f8-1a697867feb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005731859s Nov 5 23:24:10.587: INFO: Pod "pod-0510d783-3f3d-4e43-b7f8-1a697867feb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009092366s STEP: Saw pod success Nov 5 23:24:10.587: INFO: Pod "pod-0510d783-3f3d-4e43-b7f8-1a697867feb6" satisfied condition "Succeeded or Failed" Nov 5 23:24:10.589: INFO: Trying to get logs from node node2 pod pod-0510d783-3f3d-4e43-b7f8-1a697867feb6 container test-container: STEP: delete the pod Nov 5 23:24:10.602: INFO: Waiting for pod pod-0510d783-3f3d-4e43-b7f8-1a697867feb6 to disappear Nov 5 23:24:10.604: INFO: Pod pod-0510d783-3f3d-4e43-b7f8-1a697867feb6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:10.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6465" for this suite. 
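------------------------------
The EmptyDir test builds a pod that writes into the volume and inspects the resulting mode bits (0777 on the default, node-disk medium). A rough equivalent, with the name and shell commands invented for the sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.34",
				// Show the mount's permissions, then prove it is writable.
				Command:      []string{"sh", "-c", "ls -ld /ed && touch /ed/f && ls -l /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "ed", MountPath: "/ed"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "ed",
				// An empty EmptyDirVolumeSource means the default medium (node disk);
				// Medium: corev1.StorageMediumMemory would give the tmpfs variant
				// exercised by the (non-root,0777,tmpfs) test later in this log.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
------------------------------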
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":242,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:14.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:23:14.606: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:15.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-618" for this suite. • [SLOW TEST:61.302 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":7,"skipped":96,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:10.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Nov 5 23:24:12.701: INFO: running pods: 0 < 3 Nov 5 23:24:14.703: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:16.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9581" for this suite. 
• [SLOW TEST:6.084 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":16,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:06.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8473 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8473;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8473 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8473;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8473.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8473.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8473.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8473.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8473.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8473.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8473.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8473.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8473.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 231.44.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.44.231_udp@PTR;check="$$(dig +tcp +noall +answer +search 231.44.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.44.231_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8473 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8473;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8473 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8473;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8473.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8473.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8473.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8473.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8473.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8473.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8473.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8473.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8473.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8473.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 231.44.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.44.231_udp@PTR;check="$$(dig +tcp +noall +answer +search 231.44.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.44.231_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:24:12.527: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.530: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.533: INFO: Unable to read wheezy_udp@dns-test-service.dns-8473 from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.536: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8473 from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.538: INFO: Unable to read wheezy_udp@dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.541: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.543: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.545: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.562: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.564: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.566: INFO: Unable to read jessie_udp@dns-test-service.dns-8473 from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-8473 from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.570: INFO: Unable to read jessie_udp@dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.572: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.575: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8473.svc from pod dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172: the server could not find the requested resource (get pods dns-test-faacc1d2-5a87-4749-87fb-446948d98172) Nov 5 23:24:12.593: INFO: Lookups using dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8473 wheezy_tcp@dns-test-service.dns-8473 wheezy_udp@dns-test-service.dns-8473.svc wheezy_tcp@dns-test-service.dns-8473.svc wheezy_udp@_http._tcp.dns-test-service.dns-8473.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8473.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8473 jessie_tcp@dns-test-service.dns-8473 jessie_udp@dns-test-service.dns-8473.svc jessie_tcp@dns-test-service.dns-8473.svc jessie_udp@_http._tcp.dns-test-service.dns-8473.svc jessie_tcp@_http._tcp.dns-test-service.dns-8473.svc] Nov 5 23:24:17.662: INFO: DNS probes using dns-8473/dns-test-faacc1d2-5a87-4749-87fb-446948d98172 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:17.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8473" for this suite. 
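------------------------------
The dig loops above work because of the pod's resolv.conf search path: inside namespace dns-8473, the partial names dns-test-service and dns-test-service.dns-8473 expand to the full cluster domain, and the early "Unable to read" lines are just probes racing DNS record propagation. The headless service the test creates looks roughly like this in client-go (selector, port, and namespace are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: corev1.ServiceSpec{
			// Headless: no virtual IP; cluster DNS answers with the backing
			// pods' A records, which the A/SRV probes in the test resolve.
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created headless service", created.Name)
}
------------------------------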
• [SLOW TEST:11.232 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:07.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:24:07.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:24:09.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751447, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751447, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751447, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751447, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:24:12.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:24:12.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:20.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8377" for this suite. STEP: Destroying namespace "webhook-8377-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.889 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":6,"skipped":52,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:20.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Nov 5 23:24:20.250: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Nov 5 23:24:20.263: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:20.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3318" for this suite. 
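------------------------------
The RuntimeClass test walks the node.k8s.io/v1 API verbs (create, get, list, watch, patch, update, delete, deleteCollection). The create/list/delete portion in client-go — the object name and handler are placeholders; the handler must name something the node's CRI runtime actually configures:

package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// RuntimeClass is cluster-scoped, hence no namespace argument below.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-runtimeclass"},
		Handler:    "runc", // placeholder handler name
	}
	if _, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	list, err := cs.NodeV1().RuntimeClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtimeclasses: %d\n", len(list.Items))

	if err := cs.NodeV1().RuntimeClasses().Delete(ctx, rc.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------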
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":7,"skipped":83,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:17.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-2d78d7e2-b750-4028-8bc2-60f6004c27a8 STEP: Creating a pod to test consume configMaps Nov 5 23:24:17.829: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8" in namespace "projected-3788" to be "Succeeded or Failed" Nov 5 23:24:17.833: INFO: Pod "pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.69633ms Nov 5 23:24:19.837: INFO: Pod "pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007302931s Nov 5 23:24:21.841: INFO: Pod "pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011062944s STEP: Saw pod success Nov 5 23:24:21.841: INFO: Pod "pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8" satisfied condition "Succeeded or Failed" Nov 5 23:24:21.843: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8 container agnhost-container: STEP: delete the pod Nov 5 23:24:21.871: INFO: Waiting for pod pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8 to disappear Nov 5 23:24:21.872: INFO: Pod pod-projected-configmaps-c077387d-cc0c-48cf-89f6-35a297d042f8 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:21.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3788" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:15.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 5 23:24:15.947: INFO: The status of Pod annotationupdatec426ef59-9885-4fe2-80d1-83977e06fc5c is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:17.950: INFO: The status of Pod annotationupdatec426ef59-9885-4fe2-80d1-83977e06fc5c is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:19.951: INFO: The status of Pod annotationupdatec426ef59-9885-4fe2-80d1-83977e06fc5c is Running (Ready = true) Nov 5 23:24:20.476: INFO: Successfully updated pod "annotationupdatec426ef59-9885-4fe2-80d1-83977e06fc5c" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:22.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9600" for this suite. 
• [SLOW TEST:6.601 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":105,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:10.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-6cc71fb2-86bb-4d31-9d28-4bfae2ef14a4 STEP: Creating secret with name s-test-opt-upd-96574239-2c52-4b47-81c1-cc2f3b371f1d STEP: Creating the pod Nov 5 23:23:10.611: INFO: The status of Pod pod-secrets-6f6644c2-e5e3-4181-be75-44774c531388 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:12.614: INFO: The status of Pod pod-secrets-6f6644c2-e5e3-4181-be75-44774c531388 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:14.614: INFO: The status of Pod pod-secrets-6f6644c2-e5e3-4181-be75-44774c531388 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:16.614: INFO: The status of Pod pod-secrets-6f6644c2-e5e3-4181-be75-44774c531388 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:23:18.615: INFO: The status of Pod pod-secrets-6f6644c2-e5e3-4181-be75-44774c531388 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-6cc71fb2-86bb-4d31-9d28-4bfae2ef14a4 STEP: Updating secret s-test-opt-upd-96574239-2c52-4b47-81c1-cc2f3b371f1d STEP: Creating secret with name s-test-opt-create-63a2fbea-269a-4164-ba1d-3bf94af12803 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:27.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8447" for this suite. 
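------------------------------
"Optional" in the Secrets test means the pod may start even though a referenced secret does not exist yet, and the mounted volume catches up once the secret appears; the 77-second runtime above is largely that propagation wait. A sketch of the shape, with hypothetical names:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox:1.34",
				Command:      []string{"sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create-demo", // does not exist yet
						Optional:   &optional,                // so the pod can still start
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Creating the secret afterwards makes the kubelet populate the mounted
	// volume on its next sync, which is the update the test waits to observe.
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "s-test-opt-create-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("secret created; volume contents will appear in the pod")
}
------------------------------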
• [SLOW TEST:77.110 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":217,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:20.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:24:20.438: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 5 23:24:28.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 create -f -' Nov 5 23:24:29.416: INFO: stderr: "" Nov 5 23:24:29.416: INFO: stdout: "e2e-test-crd-publish-openapi-7191-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 5 23:24:29.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 delete e2e-test-crd-publish-openapi-7191-crds test-cr' Nov 5 23:24:29.575: INFO: stderr: "" Nov 5 23:24:29.575: INFO: stdout: "e2e-test-crd-publish-openapi-7191-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Nov 5 23:24:29.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 apply -f -' Nov 5 23:24:29.915: INFO: stderr: "" Nov 5 23:24:29.915: INFO: stdout: "e2e-test-crd-publish-openapi-7191-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 5 23:24:29.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 delete e2e-test-crd-publish-openapi-7191-crds test-cr' Nov 5 23:24:30.092: INFO: stderr: "" Nov 5 23:24:30.092: INFO: stdout: "e2e-test-crd-publish-openapi-7191-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 5 23:24:30.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 explain e2e-test-crd-publish-openapi-7191-crds' Nov 5 23:24:30.423: INFO: stderr: "" Nov 5 23:24:30.423: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7191-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:33.971:
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1180" for this suite. • [SLOW TEST:13.565 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":8,"skipped":150,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:34.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Nov 5 23:24:34.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4763 api-versions' Nov 5 23:24:34.143: INFO: stderr: "" Nov 5 23:24:34.143: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:34.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4763" for this suite. 
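------------------------------
`kubectl api-versions` is a thin wrapper over the discovery endpoints, so the same check ("is v1 in the list?") can be done directly against the API server. A short sketch using the discovery client bundled with the clientset:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// ServerGroups reads /apis (plus /api for the core group), the same data
	// kubectl api-versions prints one GroupVersion per line from.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the core group/version the test asserts on
				fmt.Println("found core group/version:", v.GroupVersion)
			}
		}
	}
}
------------------------------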
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":9,"skipped":167,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:21.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 5 23:24:22.010: INFO: The status of Pod labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:24.014: INFO: The status of Pod labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:26.014: INFO: The status of Pod labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:28.013: INFO: The status of Pod labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:30.014: INFO: The status of Pod labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:24:32.016: INFO: The status of Pod labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9 is Running (Ready = true) Nov 5 23:24:32.538: INFO: Successfully updated pod "labelsupdate1a50fec4-9632-4622-8548-b28ba651eeb9" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:36.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5088" for this suite. 
• [SLOW TEST:14.962 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":429,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:36.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Nov 5 23:24:36.969: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:36.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8703" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":20,"skipped":434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:27.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-7455 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7455 STEP: Deleting pre-stop pod Nov 5 23:24:42.754: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:42.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7455" for this suite. • [SLOW TEST:15.091 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":14,"skipped":219,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:42.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 5 23:24:42.808: INFO: Waiting up to 5m0s for pod "pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b" in namespace "emptydir-4488" to be "Succeeded or Failed" Nov 5 23:24:42.814: INFO: Pod "pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.251838ms Nov 5 23:24:44.817: INFO: Pod "pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008224712s Nov 5 23:24:46.820: INFO: Pod "pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01186665s STEP: Saw pod success Nov 5 23:24:46.820: INFO: Pod "pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b" satisfied condition "Succeeded or Failed" Nov 5 23:24:46.824: INFO: Trying to get logs from node node1 pod pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b container test-container: STEP: delete the pod Nov 5 23:24:46.836: INFO: Waiting for pod pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b to disappear Nov 5 23:24:46.838: INFO: Pod pod-8f49a000-0a4a-470f-b58c-5ed5d985c92b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:24:46.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4488" for this suite. 
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:24:46.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-9e9a45e4-4748-4968-9e77-02b7780d4120
STEP: Creating a pod to test consume configMaps
Nov 5 23:24:46.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea" in namespace "configmap-8531" to be "Succeeded or Failed"
Nov 5 23:24:46.900: INFO: Pod "pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865623ms
Nov 5 23:24:48.903: INFO: Pod "pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005323075s
Nov 5 23:24:50.907: INFO: Pod "pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009897302s
STEP: Saw pod success
Nov 5 23:24:50.908: INFO: Pod "pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea" satisfied condition "Succeeded or Failed"
Nov 5 23:24:50.911: INFO: Trying to get logs from node node2 pod pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea container configmap-volume-test:
STEP: delete the pod
Nov 5 23:24:50.925: INFO: Waiting for pod pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea to disappear
Nov 5 23:24:50.927: INFO: Pod pod-configmaps-d116c789-15b1-4890-a8cd-6a65bc760bea no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:24:50.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8531" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":225,"failed":0}
SSS
------------------------------
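"Consumable in multiple volumes in the same pod" means one ConfigMap backing two separate volume mounts. A sketch of that pod shape in Go (names, image, and mount paths are illustrative; the real test also checks file content and modes in both mounts):

```go
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newDualConfigMapPod mounts the same ConfigMap at two paths in one pod,
// which is the situation the test above verifies.
func newDualConfigMapPod(ns, cmName string) *corev1.Pod {
	// Two volumes, both sourced from the same ConfigMap.
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-dual-volume-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol("cm-vol-1"), vol("cm-vol-2")},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // illustrative stand-in
				Command: []string{"sh", "-c", "cat /etc/cm1/* /etc/cm2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-vol-1", MountPath: "/etc/cm1"},
					{Name: "cm-vol-2", MountPath: "/etc/cm2"},
				},
			}},
		},
	}
}
```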
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:24:50.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-nqddh in namespace proxy-3627
I1105 23:24:50.984847 26 runners.go:190] Created replication controller with name: proxy-service-nqddh, namespace: proxy-3627, replica count: 1
I1105 23:24:52.035920 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1105 23:24:53.036459 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1105 23:24:54.037324 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:24:55.037858 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:24:56.038841 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:24:57.039606 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:24:58.040614 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:24:59.042081 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:25:00.043042 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1105 23:25:01.044148 26 runners.go:190] proxy-service-nqddh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 5 23:25:01.046: INFO: setup took 10.077384021s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Nov 5 23:25:01.049: INFO: (0) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.817127ms) Nov 5 23:25:01.049: INFO: (0) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.881941ms) Nov 5 23:25:01.049: INFO: (0) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 3.090036ms) Nov 5 23:25:01.053: INFO: (0) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<...
(200; 7.024745ms) Nov 5 23:25:01.053: INFO: (0) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 7.100772ms) Nov 5 23:25:01.057: INFO: (0) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 10.881853ms) Nov 5 23:25:01.057: INFO: (0) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 11.189536ms) Nov 5 23:25:01.057: INFO: (0) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 10.883741ms) Nov 5 23:25:01.057: INFO: (0) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 10.95083ms) Nov 5 23:25:01.059: INFO: (0) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 12.655567ms) Nov 5 23:25:01.059: INFO: (0) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 12.829052ms) Nov 5 23:25:01.059: INFO: (0) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 12.97796ms) Nov 5 23:25:01.059: INFO: (0) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 13.123686ms) Nov 5 23:25:01.059: INFO: (0) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 12.959036ms) Nov 5 23:25:01.059: INFO: (0) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 12.997175ms) Nov 5 23:25:01.060: INFO: (0) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... (200; 2.805062ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.877482ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.918222ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.043224ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 3.135956ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.380341ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.205953ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 3.319042ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.543436ms) Nov 5 23:25:01.063: INFO: (1) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.551528ms) Nov 5 23:25:01.064: INFO: (1) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.89908ms) Nov 5 23:25:01.064: INFO: (1) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.960272ms) Nov 5 23:25:01.067: INFO: (2) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.740946ms) Nov 5 23:25:01.067: INFO: (2) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.752322ms) Nov 5 23:25:01.067: INFO: (2) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... 
(200; 2.700715ms) Nov 5 23:25:01.067: INFO: (2) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.643576ms) Nov 5 23:25:01.067: INFO: (2) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.905015ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 3.639456ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.849614ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.75589ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.704686ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.982613ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 3.720841ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.824375ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.996407ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 4.211535ms) Nov 5 23:25:01.068: INFO: (2) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 4.189029ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 1.977934ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 2.631312ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.612214ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.723843ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.695441ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 2.893065ms) Nov 5 23:25:01.071: INFO: (3) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... (200; 2.516416ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.38742ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.341086ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.660752ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.505615ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.419993ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.818667ms) Nov 5 23:25:01.076: INFO: (4) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... 
(200; 2.646746ms) Nov 5 23:25:01.080: INFO: (5) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.645078ms) Nov 5 23:25:01.080: INFO: (5) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.736225ms) Nov 5 23:25:01.080: INFO: (5) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 3.072544ms) Nov 5 23:25:01.080: INFO: (5) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... (200; 3.195017ms) Nov 5 23:25:01.081: INFO: (5) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 3.102356ms) Nov 5 23:25:01.081: INFO: (5) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.162386ms) Nov 5 23:25:01.081: INFO: (5) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.440811ms) Nov 5 23:25:01.081: INFO: (5) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.953992ms) Nov 5 23:25:01.081: INFO: (5) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.993636ms) Nov 5 23:25:01.081: INFO: (5) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.979594ms) Nov 5 23:25:01.082: INFO: (5) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 4.235694ms) Nov 5 23:25:01.084: INFO: (6) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 2.264634ms) Nov 5 23:25:01.084: INFO: (6) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.435515ms) Nov 5 23:25:01.084: INFO: (6) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... 
(200; 2.330013ms) Nov 5 23:25:01.084: INFO: (6) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.433989ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.746137ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.833874ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.025965ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test (200; 3.062272ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.853791ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 2.938611ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.37412ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.296485ms) Nov 5 23:25:01.085: INFO: (6) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.52774ms) Nov 5 23:25:01.086: INFO: (6) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 4.043531ms) Nov 5 23:25:01.086: INFO: (6) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 4.303582ms) Nov 5 23:25:01.088: INFO: (7) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 1.68456ms) Nov 5 23:25:01.088: INFO: (7) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 1.868463ms) Nov 5 23:25:01.088: INFO: (7) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 1.979059ms) Nov 5 23:25:01.089: INFO: (7) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.068287ms) Nov 5 23:25:01.089: INFO: (7) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.463553ms) Nov 5 23:25:01.089: INFO: (7) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.496266ms) Nov 5 23:25:01.089: INFO: (7) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 2.470749ms) Nov 5 23:25:01.089: INFO: (7) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.766542ms) Nov 5 23:25:01.090: INFO: (7) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.10358ms) Nov 5 23:25:01.090: INFO: (7) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.282912ms) Nov 5 23:25:01.090: INFO: (7) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.224609ms) Nov 5 23:25:01.090: INFO: (7) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... (200; 2.52822ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.484219ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.545947ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... 
(200; 3.005464ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.82082ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.85605ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.981491ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.882017ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.998205ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.238844ms) Nov 5 23:25:01.094: INFO: (8) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.570903ms) Nov 5 23:25:01.095: INFO: (8) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.622435ms) Nov 5 23:25:01.095: INFO: (8) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.808033ms) Nov 5 23:25:01.095: INFO: (8) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.918241ms) Nov 5 23:25:01.095: INFO: (8) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 4.072397ms) Nov 5 23:25:01.097: INFO: (9) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 1.973053ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... (200; 2.548433ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.718374ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.728011ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.928817ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.871736ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.86994ms) Nov 5 23:25:01.098: INFO: (9) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.08261ms) Nov 5 23:25:01.099: INFO: (9) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.26524ms) Nov 5 23:25:01.099: INFO: (9) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 3.16277ms) Nov 5 23:25:01.099: INFO: (9) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... 
(200; 3.167911ms) Nov 5 23:25:01.099: INFO: (9) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.314706ms) Nov 5 23:25:01.099: INFO: (9) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.898545ms) Nov 5 23:25:01.099: INFO: (9) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.727803ms) Nov 5 23:25:01.102: INFO: (9) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 6.828695ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.182369ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.184646ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... (200; 2.158346ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.491216ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 2.753792ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.050869ms) Nov 5 23:25:01.105: INFO: (10) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.984808ms) Nov 5 23:25:01.106: INFO: (10) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.118102ms) Nov 5 23:25:01.106: INFO: (10) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.247153ms) Nov 5 23:25:01.106: INFO: (10) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 3.249128ms) Nov 5 23:25:01.106: INFO: (10) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.348293ms) Nov 5 23:25:01.106: INFO: (10) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.543972ms) Nov 5 23:25:01.106: INFO: (10) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.897894ms) Nov 5 23:25:01.107: INFO: (10) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.991437ms) Nov 5 23:25:01.107: INFO: (10) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 4.206685ms) Nov 5 23:25:01.109: INFO: (11) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.147153ms) Nov 5 23:25:01.109: INFO: (11) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.317045ms) Nov 5 23:25:01.109: INFO: (11) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.471604ms) Nov 5 23:25:01.109: INFO: (11) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.346044ms) Nov 5 23:25:01.109: INFO: (11) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.650473ms) Nov 5 23:25:01.110: INFO: (11) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... 
(200; 2.75556ms) Nov 5 23:25:01.110: INFO: (11) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.6144ms) Nov 5 23:25:01.110: INFO: (11) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... (200; 2.007875ms) Nov 5 23:25:01.114: INFO: (12) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.307783ms) Nov 5 23:25:01.114: INFO: (12) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 2.94118ms) Nov 5 23:25:01.114: INFO: (12) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... (200; 2.924983ms) Nov 5 23:25:01.114: INFO: (12) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.740968ms) Nov 5 23:25:01.114: INFO: (12) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 2.952935ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 3.26619ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 3.291066ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.347296ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.522892ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.458196ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 3.388907ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.802977ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.714151ms) Nov 5 23:25:01.115: INFO: (12) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.549689ms) Nov 5 23:25:01.117: INFO: (13) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.058277ms) Nov 5 23:25:01.117: INFO: (13) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.088919ms) Nov 5 23:25:01.117: INFO: (13) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 1.975067ms) Nov 5 23:25:01.118: INFO: (13) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.658364ms) Nov 5 23:25:01.118: INFO: (13) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... 
(200; 2.854207ms) Nov 5 23:25:01.118: INFO: (13) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.870789ms) Nov 5 23:25:01.118: INFO: (13) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.979357ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.420434ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.20457ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.431936ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.274346ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.441146ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.646374ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.833694ms) Nov 5 23:25:01.119: INFO: (13) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.852958ms) Nov 5 23:25:01.121: INFO: (14) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.215976ms) Nov 5 23:25:01.121: INFO: (14) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 2.220852ms) Nov 5 23:25:01.121: INFO: (14) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.246351ms) Nov 5 23:25:01.121: INFO: (14) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.140703ms) Nov 5 23:25:01.122: INFO: (14) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.40175ms) Nov 5 23:25:01.122: INFO: (14) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.586506ms) Nov 5 23:25:01.122: INFO: (14) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... (200; 3.4341ms) Nov 5 23:25:01.123: INFO: (14) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.620253ms) Nov 5 23:25:01.123: INFO: (14) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.698106ms) Nov 5 23:25:01.123: INFO: (14) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.81162ms) Nov 5 23:25:01.123: INFO: (14) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.988652ms) Nov 5 23:25:01.123: INFO: (14) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 4.081405ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 1.986078ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.153707ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... 
(200; 2.32811ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.430443ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.858093ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.839116ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 2.866818ms) Nov 5 23:25:01.126: INFO: (15) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... (200; 3.550721ms) Nov 5 23:25:01.127: INFO: (15) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.616979ms) Nov 5 23:25:01.127: INFO: (15) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.794111ms) Nov 5 23:25:01.127: INFO: (15) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 3.663227ms) Nov 5 23:25:01.128: INFO: (15) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 4.282333ms) Nov 5 23:25:01.130: INFO: (16) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 1.656721ms) Nov 5 23:25:01.130: INFO: (16) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 1.694083ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.237302ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.531663ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 2.669151ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test (200; 2.925246ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.938219ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.896422ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... 
(200; 3.037815ms) Nov 5 23:25:01.131: INFO: (16) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 3.11443ms) Nov 5 23:25:01.132: INFO: (16) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.408168ms) Nov 5 23:25:01.132: INFO: (16) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.427084ms) Nov 5 23:25:01.132: INFO: (16) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.786389ms) Nov 5 23:25:01.134: INFO: (17) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 1.88359ms) Nov 5 23:25:01.134: INFO: (17) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 1.866472ms) Nov 5 23:25:01.134: INFO: (17) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.105624ms) Nov 5 23:25:01.134: INFO: (17) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 2.376016ms) Nov 5 23:25:01.135: INFO: (17) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.331099ms) Nov 5 23:25:01.135: INFO: (17) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... (200; 2.838612ms) Nov 5 23:25:01.135: INFO: (17) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 2.727846ms) Nov 5 23:25:01.135: INFO: (17) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 2.762998ms) Nov 5 23:25:01.135: INFO: (17) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.088592ms) Nov 5 23:25:01.136: INFO: (17) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 3.321452ms) Nov 5 23:25:01.136: INFO: (17) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.488149ms) Nov 5 23:25:01.136: INFO: (17) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.42857ms) Nov 5 23:25:01.136: INFO: (17) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.83632ms) Nov 5 23:25:01.136: INFO: (17) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.937068ms) Nov 5 23:25:01.136: INFO: (17) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 4.199532ms) Nov 5 23:25:01.139: INFO: (18) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 2.065454ms) Nov 5 23:25:01.139: INFO: (18) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:462/proxy/: tls qux (200; 2.137038ms) Nov 5 23:25:01.139: INFO: (18) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: ... 
(200; 2.87581ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.009859ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.85364ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.189459ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.155427ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.009585ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:162/proxy/: bar (200; 3.072454ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.31613ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 3.654415ms) Nov 5 23:25:01.140: INFO: (18) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:1080/proxy/: test<... (200; 3.743933ms) Nov 5 23:25:01.141: INFO: (18) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 4.058066ms) Nov 5 23:25:01.141: INFO: (18) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 4.23495ms) Nov 5 23:25:01.141: INFO: (18) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 4.8067ms) Nov 5 23:25:01.143: INFO: (19) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:1080/proxy/: ... (200; 1.906881ms) Nov 5 23:25:01.143: INFO: (19) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl/proxy/: test (200; 1.83558ms) Nov 5 23:25:01.144: INFO: (19) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:460/proxy/: tls baz (200; 2.071856ms) Nov 5 23:25:01.144: INFO: (19) /api/v1/namespaces/proxy-3627/pods/http:proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 2.328475ms) Nov 5 23:25:01.144: INFO: (19) /api/v1/namespaces/proxy-3627/pods/https:proxy-service-nqddh-mtgtl:443/proxy/: test<... 
(200; 2.956929ms) Nov 5 23:25:01.145: INFO: (19) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname1/proxy/: foo (200; 2.969904ms) Nov 5 23:25:01.145: INFO: (19) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname1/proxy/: foo (200; 3.408339ms) Nov 5 23:25:01.145: INFO: (19) /api/v1/namespaces/proxy-3627/pods/proxy-service-nqddh-mtgtl:160/proxy/: foo (200; 3.474237ms) Nov 5 23:25:01.145: INFO: (19) /api/v1/namespaces/proxy-3627/services/http:proxy-service-nqddh:portname2/proxy/: bar (200; 3.579679ms) Nov 5 23:25:01.145: INFO: (19) /api/v1/namespaces/proxy-3627/services/proxy-service-nqddh:portname2/proxy/: bar (200; 3.654745ms) Nov 5 23:25:01.146: INFO: (19) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname1/proxy/: tls baz (200; 3.987792ms) Nov 5 23:25:01.146: INFO: (19) /api/v1/namespaces/proxy-3627/services/https:proxy-service-nqddh:tlsportname2/proxy/: tls qux (200; 4.136743ms)
STEP: deleting ReplicationController proxy-service-nqddh in namespace proxy-3627, will wait for the garbage collector to delete the pods
Nov 5 23:25:01.204: INFO: Deleting ReplicationController proxy-service-nqddh took: 4.134185ms
Nov 5 23:25:01.304: INFO: Terminating ReplicationController proxy-service-nqddh pods took: 100.651303ms
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:25:03.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3627" for this suite.
• [SLOW TEST:12.669 seconds]
[sig-network] Proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":17,"skipped":228,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:24:34.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
Nov 5 23:24:59.282: INFO: EndpointSlice for Service endpointslice-1380/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:25:09.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-1380" for this suite.
• [SLOW TEST:35.120 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":10,"skipped":182,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:24:37.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:24:37.114: INFO: created pod
Nov 5 23:24:37.114: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6025" to be "Succeeded or Failed"
Nov 5 23:24:37.118: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.764676ms
Nov 5 23:24:39.122: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007585482s
Nov 5 23:24:41.126: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011387727s
STEP: Saw pod success
Nov 5 23:24:41.126: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Nov 5 23:25:11.127: INFO: polling logs
Nov 5 23:25:11.144: INFO: Pod logs:
2021/11/05 23:24:39 OK: Got token
2021/11/05 23:24:39 validating with in-cluster discovery
2021/11/05 23:24:39 OK: got issuer https://kubernetes.default.svc.cluster.local
2021/11/05 23:24:39 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6025:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1636155277, NotBefore:1636154677, IssuedAt:1636154677, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6025", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"afa8173b-b95d-4fda-a994-b86cd053fb23"}}}
2021/11/05 23:24:39 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
2021/11/05 23:24:39 OK: Validated signature on JWT
2021/11/05 23:24:39 OK: Got valid claims from token!
2021/11/05 23:24:39 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6025:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1636155277, NotBefore:1636154677, IssuedAt:1636154677, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6025", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"afa8173b-b95d-4fda-a994-b86cd053fb23"}}}
Nov 5 23:25:11.144: INFO: completed pod
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:25:11.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6025" for this suite.
• [SLOW TEST:34.081 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":21,"skipped":483,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:25:09.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:25:09.394: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b474e061-7e73-42b6-99b8-129bcbdd178e" in namespace "security-context-test-9990" to be "Succeeded or Failed"
Nov 5 23:25:09.400: INFO: Pod "busybox-readonly-false-b474e061-7e73-42b6-99b8-129bcbdd178e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.523801ms
Nov 5 23:25:11.404: INFO: Pod "busybox-readonly-false-b474e061-7e73-42b6-99b8-129bcbdd178e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009760047s
Nov 5 23:25:13.408: INFO: Pod "busybox-readonly-false-b474e061-7e73-42b6-99b8-129bcbdd178e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013734857s
Nov 5 23:25:13.408: INFO: Pod "busybox-readonly-false-b474e061-7e73-42b6-99b8-129bcbdd178e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:25:13.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9990" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":220,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
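The readOnlyRootFilesystem=false case above boils down to a container-level security context plus a write to the root filesystem that must succeed. A minimal sketch in Go (pod name, image, and command are illustrative, not the generated busybox-readonly-false-... spec):

```go
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newWritableRootfsPod runs a container with ReadOnlyRootFilesystem
// explicitly set to false, so a write outside any volume mount succeeds.
// Flipping the flag to true would make the same write fail.
func newWritableRootfsPod(ns string) *corev1.Pod {
	readOnly := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox", // illustrative stand-in
				Command: []string{"sh", "-c", "echo ok > /rootfs-write-test"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}
```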
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:25:03.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:25:03.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Nov 5 23:25:11.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4764 --namespace=crd-publish-openapi-4764 create -f -'
Nov 5 23:25:12.209: INFO: stderr: ""
Nov 5 23:25:12.209: INFO: stdout: "e2e-test-crd-publish-openapi-9855-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Nov 5 23:25:12.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4764 --namespace=crd-publish-openapi-4764 delete e2e-test-crd-publish-openapi-9855-crds test-cr'
Nov 5 23:25:12.387: INFO: stderr: ""
Nov 5 23:25:12.387: INFO: stdout: "e2e-test-crd-publish-openapi-9855-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Nov 5 23:25:12.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4764 --namespace=crd-publish-openapi-4764 apply -f -'
Nov 5 23:25:12.703: INFO: stderr: ""
Nov 5 23:25:12.703: INFO: stdout: "e2e-test-crd-publish-openapi-9855-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Nov 5 23:25:12.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4764 --namespace=crd-publish-openapi-4764 delete e2e-test-crd-publish-openapi-9855-crds test-cr'
Nov 5 23:25:12.872: INFO: stderr: ""
Nov 5 23:25:12.872: INFO: stdout: "e2e-test-crd-publish-openapi-9855-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Nov 5 23:25:12.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4764 explain e2e-test-crd-publish-openapi-9855-crds'
Nov 5 23:25:13.194: INFO: stderr: ""
Nov 5 23:25:13.195: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9855-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:25:16.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4764" for this suite.
• [SLOW TEST:13.141 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":18,"skipped":234,"failed":0}
SSSSSSSSSS
------------------------------
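The test drives this through its generated e2e-test-crd-publish-openapi-9855-crd type. One way to sketch a comparable "no validation schema" CRD with the apiextensions v1 API is a schema that only sets x-kubernetes-preserve-unknown-fields, so the server prunes and validates nothing, which is why the kubectl create/apply calls above succeed with arbitrary unknown properties and kubectl explain prints an empty DESCRIPTION. Group and names below are placeholders, and the real test's CRD construction may differ:

```go
package e2esketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newSchemalessCRD sketches a CRD whose only schema constraint is
// "object with unknown fields preserved", i.e. effectively no validation.
func newSchemalessCRD() *apiextensionsv1.CustomResourceDefinition {
	preserve := true
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrs.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "testcrs",
				Singular: "testcr",
				Kind:     "TestCr",
				ListKind: "TestCrList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
}
```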
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:25:11.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Nov 5 23:25:11.201: INFO: Pod name pod-release: Found 0 pods out of 1
Nov 5 23:25:16.205: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:25:17.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8912" for this suite.
• [SLOW TEST:6.056 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":22,"skipped":494,"failed":0}
SSSSS
------------------------------
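"Released" here means the pod stops matching the ReplicationController's selector, so the controller clears its controller ownerReference and spins up a replacement. A sketch of the label change that breaks the match, assuming a strategic-merge patch is how it is done (the suite orchestrates this through its own helpers, and the label value here is illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releasePodFromRC rewrites the pod's "name" label so it no longer matches
// an RC selector such as name=pod-release; the RC controller then releases
// the pod and creates a replacement.
func releasePodFromRC(ctx context.Context, c kubernetes.Interface, ns, pod, newLabel string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"name":%q}}}`, newLabel))
	_, err := c.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```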
• [SLOW TEST:6.056 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":22,"skipped":494,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:16.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:25:16.817: INFO: Got root ca configmap in namespace "svcaccounts-8200" Nov 5 23:25:16.820: INFO: Deleted root ca configmap in namespace "svcaccounts-8200" STEP: waiting for a new root ca configmap created Nov 5 23:25:17.323: INFO: Recreated root ca configmap in namespace "svcaccounts-8200" Nov 5 23:25:17.326: INFO: Updated root ca configmap in namespace "svcaccounts-8200" STEP: waiting for the root ca configmap reconciled Nov 5 23:25:17.829: INFO: Reconciled root ca configmap in namespace "svcaccounts-8200" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:17.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8200" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":19,"skipped":244,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:17.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Nov 5 23:25:17.273: INFO: Waiting up to 5m0s for pod "var-expansion-ec962866-e916-4d20-b65e-668498741415" in namespace "var-expansion-7969" to be "Succeeded or Failed" Nov 5 23:25:17.277: INFO: Pod "var-expansion-ec962866-e916-4d20-b65e-668498741415": Phase="Pending", Reason="", readiness=false. Elapsed: 3.608068ms Nov 5 23:25:19.280: INFO: Pod "var-expansion-ec962866-e916-4d20-b65e-668498741415": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006244063s Nov 5 23:25:21.283: INFO: Pod "var-expansion-ec962866-e916-4d20-b65e-668498741415": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010165062s STEP: Saw pod success Nov 5 23:25:21.284: INFO: Pod "var-expansion-ec962866-e916-4d20-b65e-668498741415" satisfied condition "Succeeded or Failed" Nov 5 23:25:21.286: INFO: Trying to get logs from node node2 pod var-expansion-ec962866-e916-4d20-b65e-668498741415 container dapi-container: STEP: delete the pod Nov 5 23:25:21.372: INFO: Waiting for pod var-expansion-ec962866-e916-4d20-b65e-668498741415 to disappear Nov 5 23:25:21.375: INFO: Pod var-expansion-ec962866-e916-4d20-b65e-668498741415 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:21.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7969" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":23,"skipped":499,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:13.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:24.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4090" for this suite. • [SLOW TEST:11.101 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":12,"skipped":237,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:21.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Nov 5 23:25:21.444: INFO: Waiting up to 5m0s for pod "client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb" in namespace "containers-2884" to be "Succeeded or Failed" Nov 5 23:25:21.448: INFO: Pod "client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.586114ms Nov 5 23:25:23.452: INFO: Pod "client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007610763s Nov 5 23:25:25.455: INFO: Pod "client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010536163s STEP: Saw pod success Nov 5 23:25:25.455: INFO: Pod "client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb" satisfied condition "Succeeded or Failed" Nov 5 23:25:25.457: INFO: Trying to get logs from node node1 pod client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb container agnhost-container: STEP: delete the pod Nov 5 23:25:25.467: INFO: Waiting for pod client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb to disappear Nov 5 23:25:25.470: INFO: Pod client-containers-c17b4d6e-fcfe-4200-acfa-d71e5a3bd6fb no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2884" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:25.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Nov 5 23:25:25.552: INFO: Waiting up to 5m0s for pod "client-containers-2245715e-a494-4bc2-ab63-09827fe670ff" in namespace "containers-3799" to be "Succeeded or Failed" Nov 5 23:25:25.555: INFO: Pod "client-containers-2245715e-a494-4bc2-ab63-09827fe670ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.734132ms Nov 5 23:25:27.558: INFO: Pod "client-containers-2245715e-a494-4bc2-ab63-09827fe670ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0058762s Nov 5 23:25:29.561: INFO: Pod "client-containers-2245715e-a494-4bc2-ab63-09827fe670ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008346878s STEP: Saw pod success Nov 5 23:25:29.561: INFO: Pod "client-containers-2245715e-a494-4bc2-ab63-09827fe670ff" satisfied condition "Succeeded or Failed" Nov 5 23:25:29.563: INFO: Trying to get logs from node node2 pod client-containers-2245715e-a494-4bc2-ab63-09827fe670ff container agnhost-container: STEP: delete the pod Nov 5 23:25:29.575: INFO: Waiting for pod client-containers-2245715e-a494-4bc2-ab63-09827fe670ff to disappear Nov 5 23:25:29.577: INFO: Pod client-containers-2245715e-a494-4bc2-ab63-09827fe670ff no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:29.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3799" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":533,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:29.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:29.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-463" for this suite. 
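The PodTemplate lifecycle above (create, get, patch, delete) has a direct kubectl equivalent; a minimal sketch, assuming cluster access (the object name is illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: demo-template
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx
  EOF
  kubectl get podtemplate demo-template -o yaml          # read it back
  kubectl patch podtemplate demo-template --type=merge \
    -p '{"metadata":{"labels":{"edited":"true"}}}'       # update in place
  kubectl delete podtemplate demo-template               # end of lifecycle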
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":26,"skipped":536,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:24.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:25:24.770: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:25:26.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751524, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751524, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751524, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751524, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:25:29.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:30.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7850" for this suite. STEP: Destroying namespace "webhook-7850-markers" for this suite. 
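Updating the webhook's rules, as the test does between the two configmap creations, is an ordinary patch of the MutatingWebhookConfiguration object; a sketch with a hypothetical object name (the suite's generated name is not shown in the log):

  # Remove CREATE from the first rule so newly created configmaps are no longer mutated
  kubectl patch mutatingwebhookconfiguration demo-mutating-webhook --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
  # Put CREATE back so mutation resumes
  kubectl patch mutatingwebhookconfiguration demo-mutating-webhook --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'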
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.362 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":13,"skipped":244,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:29.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:25:29.714: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e" in namespace "projected-1347" to be "Succeeded or Failed" Nov 5 23:25:29.718: INFO: Pod "downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.568293ms Nov 5 23:25:31.722: INFO: Pod "downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007374323s Nov 5 23:25:33.725: INFO: Pod "downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010337386s STEP: Saw pod success Nov 5 23:25:33.725: INFO: Pod "downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e" satisfied condition "Succeeded or Failed" Nov 5 23:25:33.728: INFO: Trying to get logs from node node1 pod downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e container client-container: STEP: delete the pod Nov 5 23:25:33.740: INFO: Waiting for pod downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e to disappear Nov 5 23:25:33.742: INFO: Pod downwardapi-volume-8cebe568-7d2d-4de5-b30a-3692c59ebd9e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:33.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1347" for this suite. 
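The downward-API volume used above exposes the container's own CPU request as a file. A minimal reconstruction, not the suite's exact fixture (the suite mounts it via a projected volume; a plain downwardAPI volume exposes the same field, and all names and values here are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
  EOF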
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":553,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:22.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1105 23:24:32.610492 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:25:34.629: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 5 23:25:34.629: INFO: Deleting pod "simpletest-rc-to-be-deleted-4qzn4" in namespace "gc-7125" Nov 5 23:25:34.636: INFO: Deleting pod "simpletest-rc-to-be-deleted-9tv7f" in namespace "gc-7125" Nov 5 23:25:34.641: INFO: Deleting pod "simpletest-rc-to-be-deleted-fxcd2" in namespace "gc-7125" Nov 5 23:25:34.647: INFO: Deleting pod "simpletest-rc-to-be-deleted-hsqfd" in namespace "gc-7125" Nov 5 23:25:34.652: INFO: Deleting pod "simpletest-rc-to-be-deleted-khb46" in namespace "gc-7125" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:34.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7125" for this suite. 
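The retention behavior above is driven entirely by ownerReferences: a pod owned by both RCs survives deletion of one owner. To inspect and reproduce by hand (pod, RC, and namespace names are placeholders):

  # A pod kept by the GC here carries two ownerReferences, one per RC
  kubectl get pod <pod-name> -n <namespace> \
    -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
  # Foreground cascading makes the delete wait on dependents, as the test expects
  kubectl delete rc <rc-name> -n <namespace> --cascade=foreground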
• [SLOW TEST:72.146 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":9,"skipped":108,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:34.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 5 23:25:37.737: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:37.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2535" for this suite. 
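The policy verified above is set per container: with FallbackToLogsOnError the kubelet uses the tail of the container log as the termination message when the container fails without writing a message file. A minimal sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termmsg-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo DONE; exit 1"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # Once the container has failed, the log tail shows up as the message
  kubectl get pod termmsg-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'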
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":118,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:30.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Nov 5 23:25:30.998: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:33.001: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:35.001: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:37.004: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:38.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8917" for this suite. 
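Adoption as tested above needs nothing more than a bare pod whose labels match a later RC's selector; a compact reproduction (names illustrative):

  # A free-standing pod carrying the label the RC will select on
  kubectl run pod-adoption --image=nginx --labels=name=pod-adoption --restart=Never
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: nginx
          image: nginx
  EOF
  # The orphan now carries an ownerReference pointing at the RC
  kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'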
• [SLOW TEST:7.060 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":14,"skipped":261,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:17.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5295 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5295 STEP: creating replication controller externalsvc in namespace services-5295 I1105 23:25:17.892697 26 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5295, replica count: 2 I1105 23:25:20.942936 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:25:23.943696 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Nov 5 23:25:23.954: INFO: Creating new exec pod Nov 5 23:25:27.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5295 exec execpodvnwtv -- /bin/sh -x -c nslookup clusterip-service.services-5295.svc.cluster.local' Nov 5 23:25:28.279: INFO: stderr: "+ nslookup clusterip-service.services-5295.svc.cluster.local\n" Nov 5 23:25:28.279: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-5295.svc.cluster.local\tcanonical name = externalsvc.services-5295.svc.cluster.local.\nName:\texternalsvc.services-5295.svc.cluster.local\nAddress: 10.233.3.218\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5295, will wait for the garbage collector to delete the pods Nov 5 23:25:28.338: INFO: Deleting ReplicationController externalsvc took: 4.481726ms Nov 5 23:25:28.439: INFO: Terminating ReplicationController externalsvc pods took: 100.681681ms Nov 5 23:25:38.848: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:38.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-5295" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:21.006 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:33.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Nov 5 23:25:41.816: INFO: &Pod{ObjectMeta:{send-events-437a9b26-3781-44fc-afc4-5d756cc2afb6 events-2801 d3394325-46ac-44df-b288-1376a71795ff 43188 0 2021-11-05 23:25:33 +0000 UTC map[name:foo time:794920671] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.31" ], "mac": "02:d2:eb:cc:69:6e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.31" ], "mac": "02:d2:eb:cc:69:6e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-11-05 23:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:25:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:25:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qrwv2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrwv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:25:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:25:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.31,StartTime:2021-11-05 23:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:25:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://a16260f0df3dca54e608818ffe88cf14cda7d135305f82d98ef7cbe667e9cc5f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Nov 5 23:25:43.821: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Nov 5 23:25:45.825: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:45.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2801" for this suite. 
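The two waits above ("Saw scheduler event", "Saw kubelet event") amount to filtering the namespace's events by source; the equivalent ad-hoc queries, assuming the API server supports the source field selector for events (names are placeholders):

  # Scheduler events for the pod
  kubectl get events -n <namespace> \
    --field-selector involvedObject.name=<pod-name>,source=default-scheduler
  # Kubelet events from the node that ran it
  kubectl get events -n <namespace> \
    --field-selector involvedObject.name=<pod-name>,source=kubelet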
• [SLOW TEST:12.066 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":28,"skipped":562,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:38.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-a0ca2f23-e5f9-479e-829d-365fd7da37e6 STEP: Creating a pod to test consume secrets Nov 5 23:25:38.085: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4" in namespace "projected-1620" to be "Succeeded or Failed" Nov 5 23:25:38.087: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.562028ms Nov 5 23:25:40.091: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006504764s Nov 5 23:25:42.097: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012201769s Nov 5 23:25:44.101: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016488736s Nov 5 23:25:46.106: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020929584s Nov 5 23:25:48.110: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.025291904s STEP: Saw pod success Nov 5 23:25:48.110: INFO: Pod "pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4" satisfied condition "Succeeded or Failed" Nov 5 23:25:48.113: INFO: Trying to get logs from node node1 pod pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4 container projected-secret-volume-test: STEP: delete the pod Nov 5 23:25:48.127: INFO: Waiting for pod pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4 to disappear Nov 5 23:25:48.128: INFO: Pod pod-projected-secrets-8cd54649-2a50-42f8-8321-97adb386acd4 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:48.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1620" for this suite. 
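The defaultMode checked above sets the permission bits on every file the projected volume writes. A minimal reconstruction, not the suite's exact fixture (names and mode illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/projected
    volumes:
    - name: secret-vol
      projected:
        defaultMode: 0400
        sources:
        - secret:
            name: demo-secret
  EOF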
• [SLOW TEST:10.086 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":274,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":20,"skipped":252,"failed":0} [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:38.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 5 23:25:51.944: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:51.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5673" for this suite. 
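The companion case above has the container write the message itself, at a non-default path and as a non-root user; since the kubelet mounts the message file writable, a sketch along these lines should behave the same (UID, path, and names are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
      terminationMessagePath: /dev/termination-custom
      securityContext:
        runAsUser: 1000
  EOF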
• [SLOW TEST:13.098 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:37.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Nov 5 23:25:37.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 create -f -' Nov 5 23:25:38.170: INFO: stderr: "" Nov 5 23:25:38.170: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 5 23:25:38.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:25:38.347: INFO: stderr: "" Nov 5 23:25:38.347: INFO: stdout: "update-demo-nautilus-2lx9z update-demo-nautilus-5vf7r " Nov 5 23:25:38.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-2lx9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:25:38.497: INFO: stderr: "" Nov 5 23:25:38.497: INFO: stdout: "" Nov 5 23:25:38.497: INFO: update-demo-nautilus-2lx9z is created but not running Nov 5 23:25:43.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:25:43.671: INFO: stderr: "" Nov 5 23:25:43.671: INFO: stdout: "update-demo-nautilus-2lx9z update-demo-nautilus-5vf7r " Nov 5 23:25:43.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-2lx9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:25:43.821: INFO: stderr: "" Nov 5 23:25:43.821: INFO: stdout: "" Nov 5 23:25:43.821: INFO: update-demo-nautilus-2lx9z is created but not running Nov 5 23:25:48.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:25:49.003: INFO: stderr: "" Nov 5 23:25:49.003: INFO: stdout: "update-demo-nautilus-2lx9z update-demo-nautilus-5vf7r " Nov 5 23:25:49.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-2lx9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:25:49.164: INFO: stderr: "" Nov 5 23:25:49.164: INFO: stdout: "" Nov 5 23:25:49.164: INFO: update-demo-nautilus-2lx9z is created but not running Nov 5 23:25:54.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:25:54.349: INFO: stderr: "" Nov 5 23:25:54.349: INFO: stdout: "update-demo-nautilus-2lx9z update-demo-nautilus-5vf7r " Nov 5 23:25:54.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-2lx9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:25:54.523: INFO: stderr: "" Nov 5 23:25:54.523: INFO: stdout: "true" Nov 5 23:25:54.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-2lx9z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:25:54.686: INFO: stderr: "" Nov 5 23:25:54.686: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:25:54.686: INFO: validating pod update-demo-nautilus-2lx9z Nov 5 23:25:54.690: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:25:54.690: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 5 23:25:54.690: INFO: update-demo-nautilus-2lx9z is verified up and running Nov 5 23:25:54.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-5vf7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:25:54.844: INFO: stderr: "" Nov 5 23:25:54.844: INFO: stdout: "true" Nov 5 23:25:54.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods update-demo-nautilus-5vf7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:25:55.019: INFO: stderr: "" Nov 5 23:25:55.019: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:25:55.019: INFO: validating pod update-demo-nautilus-5vf7r Nov 5 23:25:55.025: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:25:55.025: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:25:55.025: INFO: update-demo-nautilus-5vf7r is verified up and running STEP: using delete to clean up resources Nov 5 23:25:55.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 delete --grace-period=0 --force -f -' Nov 5 23:25:55.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:25:55.161: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 5 23:25:55.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get rc,svc -l name=update-demo --no-headers' Nov 5 23:25:55.373: INFO: stderr: "No resources found in kubectl-6560 namespace.\n" Nov 5 23:25:55.374: INFO: stdout: "" Nov 5 23:25:55.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6560 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 5 23:25:55.555: INFO: stderr: "" Nov 5 23:25:55.555: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:55.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6560" for this suite. 
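The polling loop above boils down to three reusable go-template queries; stripped of the suite's retry logic and its "exists" helper, rough equivalents look like this (label and container name match the demo, pod name is a placeholder):

  # Pod names behind the RC's label selector
  kubectl get pods -l name=update-demo \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
  # Is the update-demo container in a given pod running? (prints nothing if not)
  kubectl get pod <pod-name> -o template \
    --template='{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.state.running}}{{end}}{{end}}'
  # Which image is it running?
  kubectl get pod <pod-name> -o template \
    --template='{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}'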
• [SLOW TEST:17.781 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":11,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:48.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-3ed64f8e-b6dd-4adf-8730-93a7966cc447 STEP: Creating a pod to test consume secrets Nov 5 23:25:48.273: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451" in namespace "projected-3277" to be "Succeeded or Failed" Nov 5 23:25:48.276: INFO: Pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661629ms Nov 5 23:25:50.279: INFO: Pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006136855s Nov 5 23:25:52.285: INFO: Pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011910113s Nov 5 23:25:54.290: INFO: Pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017173481s Nov 5 23:25:56.295: INFO: Pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02150814s STEP: Saw pod success Nov 5 23:25:56.295: INFO: Pod "pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451" satisfied condition "Succeeded or Failed" Nov 5 23:25:56.297: INFO: Trying to get logs from node node2 pod pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451 container secret-volume-test: STEP: delete the pod Nov 5 23:25:56.310: INFO: Waiting for pod pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451 to disappear Nov 5 23:25:56.312: INFO: Pod pod-projected-secrets-03814c88-a77f-4217-80ed-e2d21c992451 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:56.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3277" for this suite. 
• [SLOW TEST:8.121 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":308,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:52.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:25:52.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113" in namespace "downward-api-6559" to be "Succeeded or Failed" Nov 5 23:25:52.053: INFO: Pod "downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113": Phase="Pending", Reason="", readiness=false. Elapsed: 5.495645ms Nov 5 23:25:54.057: INFO: Pod "downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008943812s Nov 5 23:25:56.060: INFO: Pod "downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012027893s Nov 5 23:25:58.063: INFO: Pod "downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015162153s STEP: Saw pod success Nov 5 23:25:58.063: INFO: Pod "downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113" satisfied condition "Succeeded or Failed" Nov 5 23:25:58.066: INFO: Trying to get logs from node node2 pod downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113 container client-container: STEP: delete the pod Nov 5 23:25:58.083: INFO: Waiting for pod downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113 to disappear Nov 5 23:25:58.085: INFO: Pod downwardapi-volume-c9630820-405e-4331-b6e9-3c28d0fdc113 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:25:58.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6559" for this suite. 
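When a container sets no memory limit, the downward API value asserted above falls back to the node's allocatable memory; the fallback can be read straight off the node (node name as in the log):

  # What limits.memory defaults to for pods on node2 when left unset
  kubectl get node node2 -o jsonpath='{.status.allocatable.memory}'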
• [SLOW TEST:6.080 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":276,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:44.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2332 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2332 I1105 23:23:44.723601 32 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2332, replica count: 2 I1105 23:23:47.775627 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:23:50.778155 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:23:50.778: INFO: Creating new exec pod Nov 5 23:23:55.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Nov 5 23:23:56.584: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Nov 5 23:23:56.584: INFO: stdout: "" Nov 5 23:23:57.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Nov 5 23:23:57.845: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Nov 5 23:23:57.845: INFO: stdout: "" Nov 5 23:23:58.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Nov 5 23:23:58.899: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Nov 5 23:23:58.899: INFO: stdout: "" Nov 5 23:23:59.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 
exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 5 23:23:59.815: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 5 23:23:59.815: INFO: stdout: ""
[... the same kubectl exec probe was repeated roughly once per second, from 23:24:00.585 through 23:25:54.585; every one of those attempts logged the identical stderr trace ending in "Connection to externalname-service 80 port [tcp/http] succeeded!" and an empty stdout ...]
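Every attempt above shows the same split outcome: nc's stderr reports that the TCP connection succeeded, while stdout, the part the test actually asserts on, stays empty. Those are two distinct stages, and the log only makes sense once they are separated. Below is a minimal Go sketch of the same probe, not the e2e framework's code; the target address is assumed to resolve the way it does inside the exec pod.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical target; inside the exec pod this resolves via cluster DNS.
	const target = "externalname-service:80"

	// Stage 1: the TCP handshake. This is all that nc's stderr line
	// "Connection to externalname-service 80 port [tcp/http] succeeded!" proves.
	conn, err := net.DialTimeout("tcp", target, 2*time.Second)
	if err != nil {
		fmt.Println("connect failed:", err)
		return
	}
	defer conn.Close()

	// Stage 2: application data coming back. The e2e check asserts on this,
	// and it is what stayed empty (stdout: "") on every attempt above.
	fmt.Fprintln(conn, "hostName")
	_ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Println("connected, but no reply before the deadline:", err)
		return
	}
	fmt.Printf("backend answered with its hostname: %q\n", buf[:n])
}

The distinction matters for triage: a handshake that completes but carries no payload back points past DNS and basic routing, toward the backend pods or the proxy's endpoint programming, which is why the framework dumps events and node state further down.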
Nov 5 23:25:55.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 5 23:25:55.969: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 5 23:25:55.969: INFO: stdout: ""
Nov 5 23:25:56.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 5 23:25:56.904: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 5 23:25:56.904: INFO: stdout: ""
Nov 5 23:25:56.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2332 exec execpod9nkst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 5 23:25:57.286: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 5 23:25:57.286: INFO: stdout: ""
Nov 5 23:25:57.287: FAIL: Unexpected error:
    <*errors.errorString | 0xc0006a0610>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001b80a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001b80a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001b80a80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
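The "2m0s timeout" in the failure is a polling deadline: the probe itself never returns an error, so the test keeps retrying until the clock runs out and only then reports the endpoint unreachable. A hedged sketch of that retry shape, using the apimachinery wait helper family the e2e suite builds on; probeService here is a hypothetical stand-in for one kubectl exec attempt, not the framework's actual function.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// probeService stands in for one "echo hostName | nc ..." attempt; it should
// report done=true only when the probe's stdout carried a non-empty hostname.
func probeService() (done bool, err error) {
	// ... run the probe here and return (stdout != "", nil);
	// a false, nil result simply schedules another attempt.
	return false, nil
}

func main() {
	// Retry about once per second, matching the cadence in the log; once the
	// two minutes elapse, the caller reports the "not reachable within 2m0s" error.
	if err := wait.PollImmediate(time.Second, 2*time.Minute, probeService); err != nil {
		fmt.Println("service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol")
	}
}

Run as written, the sketch waits the full two minutes and prints the same error text, which is exactly the behavior recorded above: two minutes of "successful" connections, none of which counted.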
Nov 5 23:25:57.288: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2332".
STEP: Found 17 events.
Nov 5 23:25:57.304: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod9nkst: { } Scheduled: Successfully assigned services-2332/execpod9nkst to node1
Nov 5 23:25:57.304: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-7kx6r: { } Scheduled: Successfully assigned services-2332/externalname-service-7kx6r to node1
Nov 5 23:25:57.304: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-m64cs: { } Scheduled: Successfully assigned services-2332/externalname-service-m64cs to node1
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:44 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-m64cs
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:44 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-7kx6r
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:46 +0000 UTC - event for externalname-service-7kx6r: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:46 +0000 UTC - event for externalname-service-7kx6r: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 281.131505ms
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:47 +0000 UTC - event for externalname-service-7kx6r: {kubelet node1} Started: Started container externalname-service
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:47 +0000 UTC - event for externalname-service-7kx6r: {kubelet node1} Created: Created container externalname-service
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:47 +0000 UTC - event for externalname-service-m64cs: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:47 +0000 UTC - event for externalname-service-m64cs: {kubelet node1} Started: Started container externalname-service
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:47 +0000 UTC - event for externalname-service-m64cs: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 335.890182ms
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:47 +0000 UTC - event for externalname-service-m64cs: {kubelet node1} Created: Created container externalname-service
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:52 +0000 UTC - event for execpod9nkst: {kubelet node1} Started: Started container agnhost-container
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:52 +0000 UTC - event for execpod9nkst: {kubelet node1} Created: Created container agnhost-container
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:52 +0000 UTC - event for execpod9nkst: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:25:57.304: INFO: At 2021-11-05 23:23:52 +0000 UTC - event for execpod9nkst: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 261.130852ms
Nov 5 23:25:57.307: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 5 23:25:57.307: INFO: execpod9nkst node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:50 +0000 UTC }]
Nov 5 23:25:57.307: INFO: externalname-service-7kx6r node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:44 +0000 UTC } {Ready True
0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:44 +0000 UTC }] Nov 5 23:25:57.307: INFO: externalname-service-m64cs node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:44 +0000 UTC }] Nov 5 23:25:57.307: INFO: Nov 5 23:25:57.312: INFO: Logging node info for node master1 Nov 5 23:25:57.314: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 43326 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} 
{} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:48 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:48 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:48 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:25:48 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:25:57.314: INFO: Logging kubelet events for node master1 Nov 5 23:25:57.317: INFO: Logging pods the kubelet thinks is on node master1 Nov 5 23:25:57.344: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.344: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:25:57.344: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.344: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:25:57.344: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:57.344: INFO: Container docker-registry ready: true, restart count 0 Nov 5 23:25:57.344: INFO: Container nginx ready: true, restart count 0 Nov 5 23:25:57.344: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.344: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:25:57.344: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses 
recorded) Nov 5 23:25:57.344: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 5 23:25:57.344: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.344: INFO: Container kube-scheduler ready: true, restart count 0 Nov 5 23:25:57.344: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:25:57.344: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:25:57.344: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:25:57.344: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.344: INFO: Container coredns ready: true, restart count 2 Nov 5 23:25:57.344: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:57.344: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:25:57.344: INFO: Container node-exporter ready: true, restart count 0 W1105 23:25:57.359784 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:25:57.429: INFO: Latency metrics for node master1 Nov 5 23:25:57.429: INFO: Logging node info for node master2 Nov 5 23:25:57.431: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 43349 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:50 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:50 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:50 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:25:50 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:25:57.432: INFO: Logging kubelet events for node master2 Nov 5 23:25:57.435: INFO: Logging pods the kubelet thinks is on node master2 Nov 5 23:25:57.448: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:25:57.448: INFO: kube-scheduler-master2 started at 
2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:25:57.448: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:25:57.448: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Container nfd-controller ready: true, restart count 0 Nov 5 23:25:57.448: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:57.448: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:25:57.448: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:25:57.448: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:25:57.448: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:25:57.448: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:25:57.448: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:25:57.448: INFO: Container kube-flannel ready: true, restart count 3 W1105 23:25:57.462383 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:25:57.523: INFO: Latency metrics for node master2 Nov 5 23:25:57.523: INFO: Logging node info for node master3 Nov 5 23:25:57.525: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 43427 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:55 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:55 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:55 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:25:55 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:25:57.526: INFO: Logging kubelet events for node master3 Nov 5 23:25:57.528: INFO: Logging pods the kubelet thinks is on node master3 Nov 5 23:25:57.541: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.541: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:25:57.541: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container 
statuses recorded) Nov 5 23:25:57.541: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:25:57.541: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.541: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:25:57.541: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.541: INFO: Container autoscaler ready: true, restart count 1 Nov 5 23:25:57.541: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.541: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:25:57.541: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.541: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:25:57.541: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:25:57.542: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:25:57.542: INFO: Container kube-flannel ready: true, restart count 1 Nov 5 23:25:57.542: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.542: INFO: Container coredns ready: true, restart count 1 Nov 5 23:25:57.542: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:57.542: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:25:57.542: INFO: Container node-exporter ready: true, restart count 0 W1105 23:25:57.557362 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
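
The Node Info blocks dumped above are verbatim prints of the Kubernetes Node API objects (conditions, capacity, images). As an illustration only, and not part of the e2e framework's own output, the following minimal client-go sketch retrieves the same per-node conditions and image inventory; it assumes client-go is available and reuses the kubeconfig path shown in this log's ">>> kubeConfig:" lines.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this log's ">>> kubeConfig:" lines.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Same data as the NodeCondition{...} entries in the dumps above.
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s: %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
		// Same data as the Images:[]ContainerImage{...} entries.
		for _, img := range n.Status.Images {
			fmt.Printf("%s: image %v (%d bytes)\n", n.Name, img.Names, img.SizeBytes)
		}
	}
}

For interactive debugging, `kubectl describe node master3` surfaces the same conditions and capacity in human-readable form.
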
Nov 5 23:25:57.627: INFO: Latency metrics for node master3 Nov 5 23:25:57.627: INFO: Logging node info for node node1 Nov 5 23:25:57.629: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 43402 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:53 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:53 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:53 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:25:53 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:25:57.630: INFO: Logging kubelet events for node node1 Nov 5 23:25:57.633: INFO: Logging pods the kubelet thinks is on node node1 Nov 5 23:25:57.673: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:57.673: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:25:57.673: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:25:57.673: INFO: prometheus-k8s-0 started at 
2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 5 23:25:57.673: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:25:57.673: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:25:57.673: INFO: Container grafana ready: true, restart count 0 Nov 5 23:25:57.673: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:25:57.673: INFO: liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 started at 2021-11-05 23:24:16 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container agnhost-container ready: true, restart count 4 Nov 5 23:25:57.673: INFO: netserver-0 started at 2021-11-05 23:25:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container webserver ready: false, restart count 0 Nov 5 23:25:57.673: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:25:57.673: INFO: update-demo-nautilus-5vf7r started at 2021-11-05 23:25:38 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container update-demo ready: false, restart count 0 Nov 5 23:25:57.673: INFO: pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29 started at 2021-11-05 23:25:56 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container test-container ready: false, restart count 0 Nov 5 23:25:57.673: INFO: test-pod started at 2021-11-05 23:23:17 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container webserver ready: true, restart count 0 Nov 5 23:25:57.673: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:25:57.673: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:25:57.673: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:25:57.673: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:25:57.673: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded) Nov 5 23:25:57.673: INFO: Container discover ready: false, restart count 0 Nov 5 23:25:57.673: INFO: Container init ready: false, restart count 0 Nov 5 23:25:57.673: INFO: Container install ready: false, restart count 0 Nov 5 23:25:57.673: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:25:57.673: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:57.673: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:25:57.673: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:25:57.673: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:25:57.673: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:25:57.673: INFO: Container collectd ready: 
true, restart count 0 Nov 5 23:25:57.673: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:25:57.673: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:25:57.673: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:25:57.673: INFO: externalname-service-7kx6r started at 2021-11-05 23:23:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container externalname-service ready: true, restart count 0 Nov 5 23:25:57.673: INFO: execpod9nkst started at 2021-11-05 23:23:50 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:25:57.673: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:25:57.673: INFO: externalname-service-m64cs started at 2021-11-05 23:23:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container externalname-service ready: true, restart count 0 Nov 5 23:25:57.673: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:57.673: INFO: Container nfd-worker ready: true, restart count 0 W1105 23:25:57.688342 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:25:58.263: INFO: Latency metrics for node node1 Nov 5 23:25:58.263: INFO: Logging node info for node node2 Nov 5 23:25:58.276: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 43385 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 
feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:52 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:52 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:25:52 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:25:52 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:25:58.277: INFO: Logging kubelet events for node node2 Nov 5 23:25:58.280: INFO: Logging pods the kubelet thinks is on node node2 Nov 5 23:25:58.294: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:58.294: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:25:58.294: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:25:58.294: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:58.294: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:25:58.294: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:25:58.294: INFO: send-events-437a9b26-3781-44fc-afc4-5d756cc2afb6 started at 2021-11-05 23:25:33 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container p ready: true, restart count 0 Nov 5 23:25:58.294: INFO: nginx-proxy-node2 
started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:25:58.294: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:25:58.294: INFO: ss2-2 started at 2021-11-05 23:25:28 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container webserver ready: true, restart count 0 Nov 5 23:25:58.294: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 5 23:25:58.294: INFO: Container discover ready: false, restart count 0 Nov 5 23:25:58.294: INFO: Container init ready: false, restart count 0 Nov 5 23:25:58.294: INFO: Container install ready: false, restart count 0 Nov 5 23:25:58.294: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:25:58.294: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:25:58.294: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:25:58.294: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:25:58.294: INFO: ss2-1 started at 2021-11-05 23:25:37 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container webserver ready: true, restart count 0 Nov 5 23:25:58.294: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Init container install-cni ready: true, restart count 1 Nov 5 23:25:58.294: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:25:58.294: INFO: ss2-0 started at 2021-11-05 23:25:48 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container webserver ready: true, restart count 0 Nov 5 23:25:58.294: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:25:58.294: INFO: forbid-27269242-n79sc started at 2021-11-05 23:22:00 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container c ready: true, restart count 0 Nov 5 23:25:58.294: INFO: update-demo-nautilus-2lx9z started at 2021-11-05 23:25:38 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container update-demo ready: false, restart count 0 Nov 5 23:25:58.294: INFO: e2e-test-httpd-pod started at 2021-11-05 23:25:55 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 Nov 5 23:25:58.294: INFO: netserver-1 started at 2021-11-05 23:25:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container webserver ready: false, restart count 0 Nov 5 23:25:58.294: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:25:58.294: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:25:58.294: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:25:58.294: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 
UTC (0+3 container statuses recorded) Nov 5 23:25:58.294: INFO: Container collectd ready: true, restart count 0 Nov 5 23:25:58.294: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:25:58.294: INFO: Container rbac-proxy ready: true, restart count 0 W1105 23:25:58.315220 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:25:58.672: INFO: Latency metrics for node node2 Nov 5 23:25:58.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2332" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [134.008 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:25:57.287: Unexpected error: <*errors.errorString | 0xc0006a0610>: { s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":13,"skipped":319,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:56.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 5 23:25:56.383: INFO: Waiting up to 5m0s for pod "pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29" in namespace "emptydir-6908" to be "Succeeded or Failed" Nov 5 23:25:56.385: INFO: Pod "pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1.885606ms Nov 5 23:25:58.387: INFO: Pod "pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004586703s Nov 5 23:26:00.393: INFO: Pod "pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009713383s STEP: Saw pod success Nov 5 23:26:00.393: INFO: Pod "pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29" satisfied condition "Succeeded or Failed" Nov 5 23:26:00.395: INFO: Trying to get logs from node node1 pod pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29 container test-container: STEP: delete the pod Nov 5 23:26:00.407: INFO: Waiting for pod pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29 to disappear Nov 5 23:26:00.410: INFO: Pod pod-0e9e9b29-a961-4078-8f37-d7cbeda92b29 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:00.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6908" for this suite. • ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:58.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:25:58.140: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Nov 5 23:26:00.163: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:01.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5770" for this suite. 
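The quota scenario above (an RC asking for more pods than a ResourceQuota allows, which must surface a ReplicaFailure condition on the RC) can be reproduced with client-go roughly as follows. This is a minimal sketch, not the framework's code: the namespace, object names, and kubeconfig path are illustrative assumptions.

```go
// Sketch: reproduce the exceeded-quota condition check outside the e2e framework.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.TODO(), "default" // illustrative namespace

	// A quota that allows only two pods in the namespace, as in the test above.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Assuming an RC named "condition-test" with replicas=3 was created (as the
	// test does), the controller should set a ReplicaFailure condition on it.
	rc, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range rc.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Printf("condition %s=%s: %s\n", c.Type, c.Status, c.Message)
		}
	}
}
```

Scaling the RC back under the quota, as the test then does, should clear that condition again.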
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":23,"skipped":286,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:58.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:25:58.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83" in namespace "projected-1323" to be "Succeeded or Failed" Nov 5 23:25:58.817: INFO: Pod "downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756919ms Nov 5 23:26:00.821: INFO: Pod "downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006802288s Nov 5 23:26:02.825: INFO: Pod "downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010801746s STEP: Saw pod success Nov 5 23:26:02.825: INFO: Pod "downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83" satisfied condition "Succeeded or Failed" Nov 5 23:26:02.828: INFO: Trying to get logs from node node2 pod downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83 container client-container: STEP: delete the pod Nov 5 23:26:02.838: INFO: Waiting for pod downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83 to disappear Nov 5 23:26:02.840: INFO: Pod downwardapi-volume-3716e150-d5e7-4847-b94f-2f3496324f83 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:02.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1323" for this suite. 
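The DefaultMode behaviour verified above is a property of the projected volume's spec. The following is a minimal sketch of such a pod in Go; the image, the mounttest flag, and all names are assumptions for illustration (the e2e framework builds a similar pod internally).

```go
// Sketch: a pod whose projected downward API volume sets DefaultMode on files.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // files in the volume should be created with this mode
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// agnhost's mounttest prints file modes; flag usage here is illustrative.
				Args:         []string{"mounttest", "--file_mode=/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```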
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":376,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:55.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 5 23:25:55.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9207 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Nov 5 23:25:55.854: INFO: stderr: "" Nov 5 23:25:55.854: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Nov 5 23:26:00.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9207 get pod e2e-test-httpd-pod -o json' Nov 5 23:26:01.083: INFO: stderr: "" Nov 5 23:26:01.083: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.39\\\"\\n ],\\n \\\"mac\\\": \\\"e2:f6:13:49:6f:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.39\\\"\\n ],\\n \\\"mac\\\": \\\"e2:f6:13:49:6f:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2021-11-05T23:25:55Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9207\",\n \"resourceVersion\": \"43542\",\n \"uid\": \"f1b5011c-71e7-4000-9fdc-96dd6e0ddd81\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-4645c\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-4645c\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-05T23:25:55Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-05T23:25:59Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-05T23:25:59Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-05T23:25:55Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://3fc398ec4c0f903ba936c38a9fc28799931b4788a3d691073a9a3ab851c9b95e\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-11-05T23:25:58Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.39\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.39\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-11-05T23:25:55Z\"\n }\n}\n" STEP: replace the image in the pod Nov 5 23:26:01.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9207 replace -f -' Nov 5 23:26:01.439: INFO: stderr: "" Nov 5 23:26:01.439: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Nov 5 23:26:01.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9207 delete pods e2e-test-httpd-pod' Nov 5 23:26:08.793: INFO: stderr: "" Nov 5 23:26:08.793: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:08.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9207" for this suite. 
• [SLOW TEST:13.142 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":12,"skipped":176,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":321,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:00.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-57a58bc8-43ba-4357-be7e-b19f0cdaf96d STEP: Creating a pod to test consume secrets Nov 5 23:26:00.460: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81" in namespace "projected-5697" to be "Succeeded or Failed" Nov 5 23:26:00.465: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81": Phase="Pending", Reason="", readiness=false. Elapsed: 5.264711ms Nov 5 23:26:02.471: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010583843s Nov 5 23:26:04.474: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014047886s Nov 5 23:26:06.480: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020009439s Nov 5 23:26:08.485: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024795978s Nov 5 23:26:10.491: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.031049787s STEP: Saw pod success Nov 5 23:26:10.491: INFO: Pod "pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81" satisfied condition "Succeeded or Failed" Nov 5 23:26:10.494: INFO: Trying to get logs from node node1 pod pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81 container projected-secret-volume-test: STEP: delete the pod Nov 5 23:26:10.510: INFO: Waiting for pod pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81 to disappear Nov 5 23:26:10.512: INFO: Pod pod-projected-secrets-24e11d7c-91ec-4b88-bf09-2e7e15c48e81 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:10.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5697" for this suite. • [SLOW TEST:10.099 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":321,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:10.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:10.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8300" for this suite. 
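The patch/list/delete-by-label sequence in the Secrets test above maps onto a handful of client-go calls. A minimal sketch; the secret name, label, and namespace are assumptions:

```go
// Sketch: patch a secret, find it by label across namespaces, delete by selector.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.TODO(), "default" // illustrative namespace

	// Strategic merge patch: add a label and replace one data key ("dmFsdWUy" is base64 "value2").
	patch := []byte(`{"metadata":{"labels":{"testsecret":"true"}},"data":{"key":"dmFsdWUy"}}`)
	if _, err := cs.CoreV1().Secrets(ns).Patch(ctx, "demo-secret", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// An empty namespace argument means "all namespaces", matching the test's cluster-wide listing.
	list, err := cs.CoreV1().Secrets("").List(ctx, metav1.ListOptions{LabelSelector: "testsecret=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println("matching secrets:", len(list.Items))

	// Delete everything carrying the label, like the LabelSelector deletion above.
	if err := cs.CoreV1().Secrets(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "testsecret=true"}); err != nil {
		panic(err)
	}
}
```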
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":19,"skipped":322,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:02.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 5 23:26:02.900: INFO: Waiting up to 5m0s for pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90" in namespace "emptydir-5678" to be "Succeeded or Failed" Nov 5 23:26:02.903: INFO: Pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005499ms Nov 5 23:26:04.908: INFO: Pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008110056s Nov 5 23:26:06.913: INFO: Pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013102984s Nov 5 23:26:08.917: INFO: Pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01665295s Nov 5 23:26:10.921: INFO: Pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020477305s STEP: Saw pod success Nov 5 23:26:10.921: INFO: Pod "pod-7770c0ec-29b6-4d51-8855-ec7f90593e90" satisfied condition "Succeeded or Failed" Nov 5 23:26:10.925: INFO: Trying to get logs from node node1 pod pod-7770c0ec-29b6-4d51-8855-ec7f90593e90 container test-container: STEP: delete the pod Nov 5 23:26:10.938: INFO: Waiting for pod pod-7770c0ec-29b6-4d51-8855-ec7f90593e90 to disappear Nov 5 23:26:10.940: INFO: Pod pod-7770c0ec-29b6-4d51-8855-ec7f90593e90 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:10.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5678" for this suite. 
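The (non-root,0666,tmpfs) case above boils down to an emptyDir volume with medium "Memory" plus a non-root pod security context. A minimal sketch of an equivalent pod follows; the image, mounttest flags, and UID are illustrative assumptions:

```go
// Sketch: non-root pod writing a 0666 file into a tmpfs-backed emptyDir.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// agnhost mounttest creates a file and reports fs type and permissions;
				// the exact flags here are illustrative.
				Args: []string{"mounttest", "--fs_type=/test-volume",
					"--new_file_0666=/test-volume/test-file", "--file_perm=/test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" requests tmpfs; the default medium uses node-local disk,
					// which is what the (non-root,0666,default) variant later in this log exercises.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```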
• [SLOW TEST:8.081 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":384,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:01.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-01ef9e41-3024-45ea-9b4e-5d98c85501b8 STEP: Creating secret with name secret-projected-all-test-volume-1b635aa6-651e-4624-a0e9-f7691ed89224 STEP: Creating a pod to test Check all projections for projected volume plugin Nov 5 23:26:01.262: INFO: Waiting up to 5m0s for pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f" in namespace "projected-9166" to be "Succeeded or Failed" Nov 5 23:26:01.264: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428602ms Nov 5 23:26:03.268: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005612563s Nov 5 23:26:05.271: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009363269s Nov 5 23:26:07.275: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01353936s Nov 5 23:26:09.279: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016844568s Nov 5 23:26:11.283: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.020823464s STEP: Saw pod success Nov 5 23:26:11.283: INFO: Pod "projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f" satisfied condition "Succeeded or Failed" Nov 5 23:26:11.285: INFO: Trying to get logs from node node1 pod projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f container projected-all-volume-test: STEP: delete the pod Nov 5 23:26:11.299: INFO: Waiting for pod projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f to disappear Nov 5 23:26:11.301: INFO: Pod projected-volume-b7bc56ab-642b-441a-88cd-2bdf62eadf6f no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:11.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9166" for this suite. • [SLOW TEST:10.085 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":308,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:08.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:13.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8302" for this suite. 
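The concurrent-watch check above relies on a property worth spelling out: watches opened from the same resourceVersion must deliver the same events in the same order. A minimal single-watch sketch with client-go (resource type and namespace are illustrative; the test uses its own objects):

```go
// Sketch: start a watch from a known resourceVersion and print events in order.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.TODO(), "default" // illustrative namespace

	// A list gives a starting resourceVersion, like the test's first step.
	list, err := cs.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: list.ResourceVersion})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// A second watch opened with the same ResourceVersion would see these
	// events in the same order; that is what the test asserts concurrently.
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
	}
}
```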
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":13,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:10.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:11.027: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0cce1d74-b5bf-4a0a-a40f-2d6d60ee1a2d" in namespace "security-context-test-8541" to be "Succeeded or Failed" Nov 5 23:26:11.030: INFO: Pod "busybox-privileged-false-0cce1d74-b5bf-4a0a-a40f-2d6d60ee1a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221497ms Nov 5 23:26:13.033: INFO: Pod "busybox-privileged-false-0cce1d74-b5bf-4a0a-a40f-2d6d60ee1a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006356089s Nov 5 23:26:15.037: INFO: Pod "busybox-privileged-false-0cce1d74-b5bf-4a0a-a40f-2d6d60ee1a2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01022251s Nov 5 23:26:15.037: INFO: Pod "busybox-privileged-false-0cce1d74-b5bf-4a0a-a40f-2d6d60ee1a2d" satisfied condition "Succeeded or Failed" Nov 5 23:26:15.043: INFO: Got logs for pod "busybox-privileged-false-0cce1d74-b5bf-4a0a-a40f-2d6d60ee1a2d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:15.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8541" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":406,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:15.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:15.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7469" for this suite. 
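The lease API exercised just above lives in the coordination.k8s.io group and is the primitive behind node heartbeats and leader election. A minimal create-and-renew sketch; holder identity, duration, and names are illustrative assumptions:

```go
// Sketch: create a Lease and renew it via the coordination.k8s.io API.
package main

import (
	"context"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.TODO(), "default" // illustrative namespace

	holder, seconds := "holder-1", int32(30)
	now := metav1.NewMicroTime(time.Now())
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
			AcquireTime:          &now,
			RenewTime:            &now,
		},
	}
	created, err := cs.CoordinationV1().Leases(ns).Create(ctx, lease, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Renewing the lease is just an update of spec.renewTime.
	renew := metav1.NewMicroTime(time.Now())
	created.Spec.RenewTime = &renew
	if _, err := cs.CoordinationV1().Leases(ns).Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```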
• ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:25:45.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9599 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 5 23:25:45.872: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 5 23:25:45.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:47.911: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:49.913: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:51.912: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:53.913: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:25:55.913: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:25:57.913: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:25:59.911: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:26:01.914: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:26:03.914: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:26:05.913: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 5 23:26:05.918: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 5 23:26:07.923: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 5 23:26:13.961: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Nov 5 23:26:13.961: INFO: Going to poll 10.244.3.224 on port 8081 at least 0 times, with a maximum of 34 tries before failing Nov 5 23:26:13.963: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.224 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9599 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:13.963: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:15.358: INFO: Found all 1 expected endpoints: [netserver-0] Nov 5 23:26:15.358: INFO: Going to poll 10.244.4.35 on port 8081 at least 0 times, with a maximum of 34 tries before failing Nov 5 23:26:15.361: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.35 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9599 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:15.361: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:16.460: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:16.460: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9599" for this suite. • [SLOW TEST:30.628 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":565,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:11.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-3668/configmap-test-99ee24f7-979e-4dac-bdfd-7371dc53752f STEP: Creating a pod to test consume configMaps Nov 5 23:26:11.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d" in namespace "configmap-3668" to be "Succeeded or Failed" Nov 5 23:26:11.359: INFO: Pod "pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.764742ms Nov 5 23:26:13.363: INFO: Pod "pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006930185s Nov 5 23:26:15.365: INFO: Pod "pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009654743s Nov 5 23:26:17.370: INFO: Pod "pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014097423s STEP: Saw pod success Nov 5 23:26:17.370: INFO: Pod "pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d" satisfied condition "Succeeded or Failed" Nov 5 23:26:17.372: INFO: Trying to get logs from node node2 pod pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d container env-test: STEP: delete the pod Nov 5 23:26:17.389: INFO: Waiting for pod pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d to disappear Nov 5 23:26:17.392: INFO: Pod pod-configmaps-6d9b1609-457d-487f-b915-f1d03cddf75d no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:17.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3668" for this suite. 
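"Consumable via the environment" in the ConfigMap test above means the pod references ConfigMap keys through env valueFrom rather than a volume. A minimal sketch; names and the echo command are illustrative:

```go
// Sketch: surface a ConfigMap key as an environment variable in a pod.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.TODO(), "default" // illustrative namespace

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```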
• [SLOW TEST:6.080 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:13.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:13.865: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:19.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5166" for this suite. • [SLOW TEST:5.568 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":14,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:19.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field 
selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:19.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-210" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":15,"skipped":250,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:10.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 5 23:26:10.620: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:19.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7892" for this suite. 
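The init-container pass above hinges on a specific semantic: with restartPolicy Never, a failed init container is not retried, the pod goes straight to phase Failed, and the app containers never start. A minimal sketch of such a pod; images and names are illustrative:

```go
// Sketch: a RestartNever pod whose failing init container keeps app containers from starting.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // failed init containers are not retried
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox:1.28",
				Command: []string{"/bin/false"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox:1.28",
				Command: []string{"/bin/true"}, // must never run
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The pod is expected to end in Phase=Failed with run1 still in a Waiting state.
}
```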
• [SLOW TEST:9.110 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":20,"skipped":328,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":17,"skipped":411,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:15.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 5 23:26:15.168: INFO: Waiting up to 5m0s for pod "pod-5fc4a380-7125-4655-a72c-70f91c6053da" in namespace "emptydir-5359" to be "Succeeded or Failed" Nov 5 23:26:15.174: INFO: Pod "pod-5fc4a380-7125-4655-a72c-70f91c6053da": Phase="Pending", Reason="", readiness=false. Elapsed: 5.944745ms Nov 5 23:26:17.177: INFO: Pod "pod-5fc4a380-7125-4655-a72c-70f91c6053da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008588553s Nov 5 23:26:19.181: INFO: Pod "pod-5fc4a380-7125-4655-a72c-70f91c6053da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012339277s Nov 5 23:26:21.185: INFO: Pod "pod-5fc4a380-7125-4655-a72c-70f91c6053da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016585538s STEP: Saw pod success Nov 5 23:26:21.185: INFO: Pod "pod-5fc4a380-7125-4655-a72c-70f91c6053da" satisfied condition "Succeeded or Failed" Nov 5 23:26:21.188: INFO: Trying to get logs from node node2 pod pod-5fc4a380-7125-4655-a72c-70f91c6053da container test-container: STEP: delete the pod Nov 5 23:26:21.199: INFO: Waiting for pod pod-5fc4a380-7125-4655-a72c-70f91c6053da to disappear Nov 5 23:26:21.201: INFO: Pod pod-5fc4a380-7125-4655-a72c-70f91c6053da no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:21.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5359" for this suite. 
• [SLOW TEST:6.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":411,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:17.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:17.473: INFO: Creating deployment "test-recreate-deployment" Nov 5 23:26:17.477: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Nov 5 23:26:17.483: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Nov 5 23:26:19.489: INFO: Waiting deployment "test-recreate-deployment" to complete Nov 5 23:26:19.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751577, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751577, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751577, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751577, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:26:21.495: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Nov 5 23:26:21.501: INFO: Updating deployment test-recreate-deployment Nov 5 23:26:21.501: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:26:21.539: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1232 53db61b0-3cfa-497b-bfa6-08535a143cb2 44307 2 2021-11-05 23:26:17 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-11-05 23:26:21 +0000 UTC 
FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:26:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00551ba98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-11-05 23:26:21 +0000 UTC,LastTransitionTime:2021-11-05 23:26:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-11-05 23:26:21 +0000 UTC,LastTransitionTime:2021-11-05 23:26:17 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Nov 5 23:26:21.542: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-1232 9c218c21-c3b4-4500-9dfa-1e25626b5fb5 44306 1 2021-11-05 23:26:21 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 53db61b0-3cfa-497b-bfa6-08535a143cb2 0xc005618070 0xc005618071}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:26:21 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53db61b0-3cfa-497b-bfa6-08535a143cb2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005618108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:26:21.542: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Nov 5 23:26:21.543: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-1232 f5a96f87-2535-4f2e-ac07-6219d964f595 44296 2 2021-11-05 23:26:17 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 53db61b0-3cfa-497b-bfa6-08535a143cb2 0xc00551bf17 0xc00551bf18}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:26:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53db61b0-3cfa-497b-bfa6-08535a143cb2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00551bfe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:26:21.545: INFO: Pod "test-recreate-deployment-85d47dcb4-fftgv" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-fftgv test-recreate-deployment-85d47dcb4- deployment-1232 3bf10030-bddb-4ce7-b010-4a1a1c170e72 44303 0 2021-11-05 23:26:21 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 9c218c21-c3b4-4500-9dfa-1e25626b5fb5 0xc00561871f 0xc005618730}] [] [{kube-controller-manager Update v1 2021-11-05 23:26:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c218c21-c3b4-4500-9dfa-1e25626b5fb5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ft8cn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ft8cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:26:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:21.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1232" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":26,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":66,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:23.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1105 23:21:23.085200 27 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:23.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-4091" for this suite. 
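The two [sig-apps] tests that finish above are driven by specs that, stripped to essentials, look roughly like the pair below (a hedged sketch; the labels, deployment name, and httpd image are taken from the log, the CronJob name and image are invented for illustration). The Deployment uses strategy type Recreate, which is why the log shows the old ReplicaSet scaled to 0 before the new ReplicaSet's pod starts; the CronJob sets spec.suspend, so the controller creates no Jobs for the duration of the five-minute watch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate              # all old pods are terminated before any new pod is created
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
---
apiVersion: batch/v1            # batch/v1beta1 is deprecated, as the warning in the log notes
kind: CronJob
metadata:
  name: suspended-demo          # illustrative name
spec:
  schedule: "*/1 * * * *"
  suspend: true                 # the field under test: no Jobs are scheduled while true
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: c
            image: busybox:1.34
            command: ["sh", "-c", "date"]

With suspend set, kubectl get jobs in the namespace stays empty, which is the "Ensuring no job exists by listing jobs explicitly" assertion.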
• [SLOW TEST:300.045 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":2,"skipped":66,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:19.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 5 23:26:19.600: INFO: Waiting up to 5m0s for pod "security-context-8832eae7-5efc-4019-a053-851e01d522b9" in namespace "security-context-9731" to be "Succeeded or Failed" Nov 5 23:26:19.603: INFO: Pod "security-context-8832eae7-5efc-4019-a053-851e01d522b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865645ms Nov 5 23:26:21.609: INFO: Pod "security-context-8832eae7-5efc-4019-a053-851e01d522b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00908513s Nov 5 23:26:23.612: INFO: Pod "security-context-8832eae7-5efc-4019-a053-851e01d522b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012188238s STEP: Saw pod success Nov 5 23:26:23.612: INFO: Pod "security-context-8832eae7-5efc-4019-a053-851e01d522b9" satisfied condition "Succeeded or Failed" Nov 5 23:26:23.615: INFO: Trying to get logs from node node1 pod security-context-8832eae7-5efc-4019-a053-851e01d522b9 container test-container: STEP: delete the pod Nov 5 23:26:23.628: INFO: Waiting for pod security-context-8832eae7-5efc-4019-a053-851e01d522b9 to disappear Nov 5 23:26:23.630: INFO: Pod security-context-8832eae7-5efc-4019-a053-851e01d522b9 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:23.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9731" for this suite. 
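The Security Context test that just passed can be reproduced by hand with a pod along these lines (a sketch; the UID/GID values, image, and name are illustrative). The container-level runAsUser/runAsGroup are what the test asserts take effect inside the container:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.34
    command: ["sh", "-c", "id -u && id -g"]   # expect the UID and GID set below
    securityContext:
      runAsUser: 1001          # container-level setting; overrides any pod-level runAsUser
      runAsGroup: 2002

Checking kubectl logs for 1001 and 2002 mirrors the suite's "Trying to get logs from node ... container test-container" step.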
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":254,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:23.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:23.179: INFO: Creating pod... Nov 5 23:26:23.194: INFO: Pod Quantity: 1 Status: Pending Nov 5 23:26:24.198: INFO: Pod Quantity: 1 Status: Pending Nov 5 23:26:25.198: INFO: Pod Quantity: 1 Status: Pending Nov 5 23:26:26.199: INFO: Pod Quantity: 1 Status: Pending Nov 5 23:26:27.197: INFO: Pod Status: Running Nov 5 23:26:27.197: INFO: Creating service... Nov 5 23:26:27.203: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/DELETE Nov 5 23:26:27.205: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Nov 5 23:26:27.205: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/GET Nov 5 23:26:27.208: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Nov 5 23:26:27.208: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/HEAD Nov 5 23:26:27.210: INFO: http.Client request:HEAD | StatusCode:200 Nov 5 23:26:27.210: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/OPTIONS Nov 5 23:26:27.213: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Nov 5 23:26:27.213: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/PATCH Nov 5 23:26:27.215: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Nov 5 23:26:27.215: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/POST Nov 5 23:26:27.217: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Nov 5 23:26:27.218: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/pods/agnhost/proxy/some/path/with/PUT Nov 5 23:26:27.220: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Nov 5 23:26:27.220: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/DELETE Nov 5 23:26:27.224: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Nov 5 23:26:27.224: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/GET Nov 5 23:26:27.227: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Nov 5 23:26:27.227: INFO: Starting http.Client for 
https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/HEAD Nov 5 23:26:27.230: INFO: http.Client request:HEAD | StatusCode:200 Nov 5 23:26:27.231: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/OPTIONS Nov 5 23:26:27.235: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Nov 5 23:26:27.235: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/PATCH Nov 5 23:26:27.239: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Nov 5 23:26:27.239: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/POST Nov 5 23:26:27.243: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Nov 5 23:26:27.243: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5104/services/test-service/proxy/some/path/with/PUT Nov 5 23:26:27.246: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:27.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5104" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":3,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:23.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:23.699: INFO: The status of Pod busybox-host-aliases90236ad2-bd40-4fe5-b9e7-2e513392942b is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:25.702: INFO: The status of Pod busybox-host-aliases90236ad2-bd40-4fe5-b9e7-2e513392942b is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:27.705: INFO: The status of Pod busybox-host-aliases90236ad2-bd40-4fe5-b9e7-2e513392942b is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:27.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6246" for this suite. 
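The Kubelet hostAliases test above amounts to a pod like this sketch (illustrative names and aliases; the suite's pod is named busybox-host-aliases<uuid>). The kubelet merges spec.hostAliases into the /etc/hosts file it manages for the container, which the test then reads back:

apiVersion: v1
kind: Pod
metadata:
  name: host-aliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - foo.local               # these entries appear in the kubelet-managed /etc/hosts
    - bar.local
  containers:
  - name: busybox
    image: busybox:1.34
    command: ["sh", "-c", "cat /etc/hosts"]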
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":266,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:21.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:28.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7147" for this suite. • [SLOW TEST:7.036 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":27,"skipped":373,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:21.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Nov 5 23:26:21.257: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:23.261: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:25.260: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:27.261: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Nov 5 23:26:27.278: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:29.285: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:31.282: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Nov 5 23:26:31.284: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.284: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.378: INFO: Exec stderr: "" Nov 5 23:26:31.378: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.378: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.453: INFO: Exec stderr: "" Nov 5 23:26:31.453: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.453: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.531: INFO: Exec stderr: "" Nov 5 23:26:31.531: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.531: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.611: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Nov 5 23:26:31.611: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.611: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.688: INFO: Exec stderr: "" Nov 5 23:26:31.688: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.688: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.772: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Nov 5 23:26:31.772: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.772: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.867: INFO: Exec stderr: "" Nov 5 23:26:31.867: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.867: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:31.938: INFO: Exec stderr: "" Nov 5 23:26:31.938: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:31.938: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:32.012: INFO: Exec stderr: "" Nov 5 23:26:32.012: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5287 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:26:32.012: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:26:32.112: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:32.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5287" for this suite. 
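The three cases the KubeletManagedEtcHosts test just verified map onto pod specs like the sketch below (hedged; the container names follow the busybox-1/busybox-3 pattern in the log, everything else is illustrative): an ordinary container gets a kubelet-managed /etc/hosts, a container that mounts its own file over /etc/hosts is left unmanaged, and a hostNetwork: true pod is not rewritten at all:

apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-1            # no mount on /etc/hosts: the kubelet manages the file
    image: busybox:1.34
    command: ["sleep", "3600"]
  - name: busybox-3            # explicit mount over /etc/hosts: kubelet leaves it alone
    image: busybox:1.34
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
      type: File
---
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnetwork-demo
spec:
  hostNetwork: true            # host-network pods keep the node's own /etc/hosts
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox:1.34
    command: ["sleep", "3600"]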
• [SLOW TEST:10.898 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":416,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:27.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:33.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2562" for this suite. • [SLOW TEST:6.077 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":4,"skipped":126,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:28.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 5 23:26:28.716: INFO: Waiting up to 5m0s for pod "pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0" in namespace "emptydir-8651" to be "Succeeded or Failed" Nov 5 23:26:28.718: INFO: Pod "pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.539523ms Nov 5 23:26:30.720: INFO: Pod "pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004787585s Nov 5 23:26:32.723: INFO: Pod "pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007855465s Nov 5 23:26:34.726: INFO: Pod "pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010543647s STEP: Saw pod success Nov 5 23:26:34.726: INFO: Pod "pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0" satisfied condition "Succeeded or Failed" Nov 5 23:26:34.729: INFO: Trying to get logs from node node1 pod pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0 container test-container: STEP: delete the pod Nov 5 23:26:34.844: INFO: Waiting for pod pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0 to disappear Nov 5 23:26:34.846: INFO: Pod pod-ca21b039-7b78-43ae-b37a-dcd4e08baea0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:34.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8651" for this suite. • [SLOW TEST:6.175 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":383,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:19.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 5 23:26:20.115: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 5 23:26:22.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:26:24.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751580, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:26:27.133: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:27.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:35.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9588" for this suite. 
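For context, wiring a conversion webhook like the one deployed above into a CRD is done through spec.conversion. The fragment below is a minimal sketch: the group, kind, and path are invented for illustration, the service name and namespace echo the log, and a real clientConfig also needs a caBundle:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # illustrative group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true                    # v1 is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook                # the API server calls out to convert v1 <-> v2 objects
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        # caBundle: <base64 PEM> is required in practice and omitted here
        service:
          name: e2e-test-crd-conversion-webhook
          namespace: crd-webhook-9588
          path: /crdconvert          # illustrative path

Listing the same objects at /apis/example.com/v1 and /apis/example.com/v2 then exercises the converter on a mixed-version list, which is what the "List CRs in v1" / "List CRs in v2" steps above do.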
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.578 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":21,"skipped":332,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:34.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:26:34.908: INFO: The status of Pod pod-secrets-0f799365-9593-49bd-a8c8-4f2027e300e6 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:36.911: INFO: The status of Pod pod-secrets-0f799365-9593-49bd-a8c8-4f2027e300e6 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:38.912: INFO: The status of Pod pod-secrets-0f799365-9593-49bd-a8c8-4f2027e300e6 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:40.913: INFO: The status of Pod pod-secrets-0f799365-9593-49bd-a8c8-4f2027e300e6 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:40.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-599" for this suite. 
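The wrapper-volume test that just completed places a secret-backed volume and a configmap-backed volume side by side in one pod. A minimal hand-rolled equivalent looks like this (names are illustrative, and the referenced Secret and ConfigMap are assumed to exist already):

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.34
    command: ["sh", "-c", "ls /etc/secret-vol /etc/config-vol"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: config-vol
      mountPath: /etc/config-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret        # assumed pre-created
  - name: config-vol
    configMap:
      name: demo-config              # assumed pre-created

Both projections materialize under distinct mount points without clobbering each other, which is the "should not conflict" assertion.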
• [SLOW TEST:6.071 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":29,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:23:24.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2536 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Nov 5 23:23:24.232: INFO: Found 0 stateful pods, waiting for 3 Nov 5 23:23:34.236: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:23:34.236: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:23:34.236: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 5 23:23:44.240: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:23:44.240: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:23:44.240: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:23:44.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2536 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:23:44.494: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:23:44.494: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:23:44.494: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Nov 5 23:23:54.523: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Nov 5 23:24:04.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2536 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:24:04.793: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:24:04.793: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:24:04.793: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:24:14.810: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:24:14.810: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:14.810: INFO: Waiting for Pod statefulset-2536/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:14.810: INFO: Waiting for Pod statefulset-2536/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:24.816: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:24:24.816: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:24.816: INFO: Waiting for Pod statefulset-2536/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:34.816: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:24:34.817: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:34.817: INFO: Waiting for Pod statefulset-2536/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:24:44.817: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:24:44.817: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Rolling back to a previous revision Nov 5 23:24:54.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2536 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:24:55.063: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:24:55.063: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:24:55.063: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:25:05.093: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Nov 5 23:25:15.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2536 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:25:15.368: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:25:15.368: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:25:15.368: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:25:25.384: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:25:25.384: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 5 23:25:25.384: INFO: Waiting for Pod statefulset-2536/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 5 23:25:25.384: INFO: Waiting for Pod statefulset-2536/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 5 23:25:35.390: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:25:35.390: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 5 23:25:35.390: INFO: Waiting for Pod statefulset-2536/ss2-1 to have revision ss2-677d6db895 update revision 
ss2-5bbbc9fc94 Nov 5 23:25:45.390: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update Nov 5 23:25:45.391: INFO: Waiting for Pod statefulset-2536/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 5 23:25:55.392: INFO: Waiting for StatefulSet statefulset-2536/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 5 23:26:05.391: INFO: Deleting all statefulset in ns statefulset-2536 Nov 5 23:26:05.393: INFO: Scaling statefulset ss2 to 0 Nov 5 23:26:45.406: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:26:45.408: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:45.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2536" for this suite. • [SLOW TEST:201.226 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":7,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:45.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-6d0619ab-10e1-4ef5-8a15-8773057db5c8 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:45.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2502" for this suite. 
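The Secrets test above passes by expecting the API server to refuse the object. Submitting a manifest like this sketch reproduces the rejection (the name echoes the secret-emptykey-test-... name in the log):

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=        # "" is an empty key; the API server rejects it at validation time

kubectl apply on this manifest fails with an Invalid error instead of creating the Secret, which is the behavior the test asserts.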
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":8,"skipped":213,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:45.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:45.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5416" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":9,"skipped":220,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:40.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:26:41.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:26:43.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751601, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751601, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751601, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751601, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the 
webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:26:46.463: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:46.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8634" for this suite. STEP: Destroying namespace "webhook-8634-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.558 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":30,"skipped":412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:16.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4327 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-4327 Nov 5 23:26:16.526: INFO: Found 0 stateful pods, waiting for 1 Nov 5 23:26:26.532: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 5 23:26:26.552: INFO: Deleting all statefulset in ns statefulset-4327 Nov 5 23:26:26.558: INFO: Scaling statefulset ss to 0 
Nov 5 23:26:46.570: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:26:46.572: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:46.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4327" for this suite. • [SLOW TEST:30.093 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":30,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:24:16.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 in namespace container-probe-2635 Nov 5 23:24:20.801: INFO: Started pod liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 in namespace container-probe-2635 STEP: checking the pod's current state and verifying that restartCount is present Nov 5 23:24:20.803: INFO: Initial restart count of pod liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 is 0 Nov 5 23:24:38.839: INFO: Restart count of pod container-probe-2635/liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 is now 1 (18.035578736s elapsed) Nov 5 23:24:58.877: INFO: Restart count of pod container-probe-2635/liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 is now 2 (38.074367508s elapsed) Nov 5 23:25:18.930: INFO: Restart count of pod container-probe-2635/liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 is now 3 (58.126824883s elapsed) Nov 5 23:25:40.994: INFO: Restart count of pod container-probe-2635/liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 is now 4 (1m20.19139425s elapsed) Nov 5 23:26:51.154: INFO: Restart count of pod container-probe-2635/liveness-76ba662b-a8fe-46bb-bcf2-af2435f2ea84 is now 5 (2m30.350892848s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:51.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2635" for this suite. 
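------------------------------
The restart counts logged above come from the kubelet killing a container whose liveness probe has started failing; status.containerStatuses[].restartCount is only ever incremented, which is the monotonicity the test asserts. A sketch of a pod spec with that shape, assuming the v1.21 client-go API (where the probe handler is still the embedded Handler field; newer releases rename it ProbeHandler); image and commands are illustrative, not the fixture's:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "busybox:1.29",
                // Healthy for ~10s, then the probe's target file disappears
                // and every subsequent probe fails, triggering restarts.
                Command: []string{"/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------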
• [SLOW TEST:154.407 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:33.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-986.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-986.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-986.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-986.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:26:37.482: INFO: DNS probes using dns-test-7828b5fc-8f13-4bb8-ab18-09d5b6b0ad7f succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-986.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-986.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-986.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-986.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:26:49.522: INFO: DNS probes using dns-test-75c56031-48f5-4aa2-abbd-1b7c22f48ed0 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-986.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-986.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-986.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-986.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:26:55.569: INFO: DNS probes using dns-test-84f96b24-c25e-4db7-9577-9a9af2816788 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:26:55.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-986" for this suite. • [SLOW TEST:22.166 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":5,"skipped":140,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:21:27.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1105 23:21:27.197927 29 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:01.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-6800" for this suite. • [SLOW TEST:334.054 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:45.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 5 23:26:45.633: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:47.636: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:49.636: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:51.636: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 5 23:26:51.651: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:53.655: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:55.655: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Nov 5 23:26:55.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 5 23:26:55.663: INFO: Pod pod-with-prestop-http-hook still exists Nov 5 23:26:57.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 5 23:26:57.668: INFO: Pod pod-with-prestop-http-hook still exists Nov 5 23:26:59.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 5 23:26:59.666: INFO: Pod pod-with-prestop-http-hook still exists Nov 5 23:27:01.665: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 5 23:27:01.668: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:01.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6890" for this suite. 
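------------------------------
The flow above uses two pods: pod-handle-http-request serves HTTP, and the pod under test declares a preStop HTTPGet hook aimed at it, so deleting the pod makes the kubelet fire the request, which the test then reads back from the handler ("check prestop hook"). A sketch of the hook stanza on such a pod; the host IP, port, and path are placeholders rather than the fixture's values, and corev1.Handler is the v1.21 field type (LifecycleHandler in newer client-go):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-prestop-http-hook",
                Image: "k8s.gcr.io/pause:3.4.1",
                Lifecycle: &corev1.Lifecycle{
                    PreStop: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Host: "10.244.0.10", // placeholder: the handler pod's IP
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------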
• [SLOW TEST:16.103 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:01.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 5 23:27:01.776: INFO: starting watch STEP: patching STEP: updating Nov 5 23:27:01.784: INFO: waiting for watch events with expected annotations Nov 5 23:27:01.784: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:01.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4454" for this suite. 
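------------------------------
The IngressClass steps above are the standard verb sweep (create, get, list, watch, patch, update, delete, deleteCollection) against the cluster-scoped networking.k8s.io/v1 resource. A client-go sketch of the core verbs; the class name and controller string are illustrative:

package main

import (
    "context"
    "fmt"

    networkingv1 "k8s.io/api/networking/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    ic := &networkingv1.IngressClass{
        ObjectMeta: metav1.ObjectMeta{Name: "example-class"},
        Spec:       networkingv1.IngressClassSpec{Controller: "example.com/ingress-controller"},
    }
    created, err := client.NetworkingV1().IngressClasses().Create(context.TODO(), ic, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created:", created.Name)

    list, err := client.NetworkingV1().IngressClasses().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("cluster has", len(list.Items), "IngressClasses")

    _ = client.NetworkingV1().IngressClasses().Delete(context.TODO(), "example-class", metav1.DeleteOptions{})
}
------------------------------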
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":11,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:55.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:26:55.940: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:26:57.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:26:59.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751615, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:27:02.960: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that 
should be denied by the webhook Nov 5 23:27:02.973: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:02.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8773" for this suite. STEP: Destroying namespace "webhook-8773-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.419 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:51.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 5 23:26:51.242: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:53.245: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:55.247: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 5 23:26:55.266: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:57.270: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:26:59.269: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:01.270: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Nov 5 23:27:01.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 5 23:27:01.280: INFO: Pod pod-with-prestop-exec-hook still exists Nov 5 23:27:03.280: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 5 23:27:03.284: INFO: Pod pod-with-prestop-exec-hook still exists Nov 5 23:27:05.280: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 5 23:27:05.283: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:05.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8428" for this suite. 
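------------------------------
The exec flavor differs from the HTTP flavor shown earlier only in the handler: here the kubelet runs a command inside the terminating container, and that command calls back to the handler pod so the test can verify it ran. A compact sketch of just the lifecycle stanza; the callback URL is a placeholder (the real fixture targets the pod-handle-http-request pod's IP), and corev1.Handler is the v1.21 name:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    lc := corev1.Lifecycle{
        PreStop: &corev1.Handler{
            Exec: &corev1.ExecAction{
                // Runs inside the container before it is stopped.
                Command: []string{"sh", "-c", "wget -qO- http://handler-pod-ip:8080/echo?msg=prestop"},
            },
        },
    }
    b, _ := json.MarshalIndent(lc, "", "  ")
    fmt.Println(string(b))
}
------------------------------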
• [SLOW TEST:14.122 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:01.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Nov 5 23:27:01.880: INFO: Waiting up to 5m0s for pod "var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11" in namespace "var-expansion-2948" to be "Succeeded or Failed" Nov 5 23:27:01.883: INFO: Pod "var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397465ms Nov 5 23:27:03.886: INFO: Pod "var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005393693s Nov 5 23:27:05.890: INFO: Pod "var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009144058s STEP: Saw pod success Nov 5 23:27:05.890: INFO: Pod "var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11" satisfied condition "Succeeded or Failed" Nov 5 23:27:05.891: INFO: Trying to get logs from node node1 pod var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11 container dapi-container: STEP: delete the pod Nov 5 23:27:05.902: INFO: Waiting for pod var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11 to disappear Nov 5 23:27:05.904: INFO: Pod var-expansion-6418e9a9-342e-45cd-a479-c91a26b9bc11 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:05.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2948" for this suite. 
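------------------------------
Substitution in args is done by the kubelet, not the shell: $(MESSAGE) below is replaced from the container's own env before the process starts, which is why the test only needs to read the pod log to verify it. A minimal sketch; the names and message text are illustrative, not the fixture's:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox:1.29",
                Command: []string{"sh", "-c"},
                // The kubelet expands $(MESSAGE) before exec, so the shell
                // never sees the $(...) syntax.
                Args: []string{"echo $(MESSAGE)"},
                Env:  []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from the environment"}},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------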
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:01.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:27:01.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af" in namespace "downward-api-6472" to be "Succeeded or Failed" Nov 5 23:27:01.316: INFO: Pod "downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af": Phase="Pending", Reason="", readiness=false. Elapsed: 1.932123ms Nov 5 23:27:03.319: INFO: Pod "downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004874709s Nov 5 23:27:05.321: INFO: Pod "downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007462233s Nov 5 23:27:07.325: INFO: Pod "downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011074781s STEP: Saw pod success Nov 5 23:27:07.325: INFO: Pod "downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af" satisfied condition "Succeeded or Failed" Nov 5 23:27:07.327: INFO: Trying to get logs from node node1 pod downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af container client-container: STEP: delete the pod Nov 5 23:27:07.337: INFO: Waiting for pod downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af to disappear Nov 5 23:27:07.339: INFO: Pod downwardapi-volume-828a9376-d504-407c-be0b-fb2e031d92af no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:07.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6472" for this suite. 
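------------------------------
The downward API volume above materializes the container's own CPU request as a file through resourceFieldRef. A sketch of the relevant spec shape; note the divisor defaults to "1", so a fractional request such as 250m is rounded up to a whole number when written to the file. Names are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_request",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------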
• [SLOW TEST:6.065 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:03.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3998" for this suite. • [SLOW TEST:6.064 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:09.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:09.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7016" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":8,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:07.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint STEP: mirroring an update to a custom Endpoint Nov 5 23:27:07.407: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Nov 5 23:27:09.417: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:11.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-2851" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":6,"skipped":91,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:09.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:27:09.255: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:17.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8661" for this suite. 
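------------------------------
"for requests and from storage" in the test name refers to the two places apiextensions.k8s.io/v1 applies a structural schema's default: on incoming write requests, and again when previously stored objects that lack the field are read back. A sketch of a CRD carrying such a default; the group, kind, and field are invented for illustration:

package main

import (
    "encoding/json"
    "fmt"

    apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    schema := &apiextv1.JSONSchemaProps{
        Type: "object",
        Properties: map[string]apiextv1.JSONSchemaProps{
            "spec": {
                Type: "object",
                Properties: map[string]apiextv1.JSONSchemaProps{
                    "replicas": {
                        Type: "integer",
                        // Applied on create/update and when reading old stored objects.
                        Default: &apiextv1.JSON{Raw: []byte(`1`)},
                    },
                },
            },
        },
    }
    crd := apiextv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
        Spec: apiextv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Names: apiextv1.CustomResourceDefinitionNames{
                Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
            },
            Scope: apiextv1.NamespaceScoped,
            Versions: []apiextv1.CustomResourceDefinitionVersion{{
                Name:    "v1",
                Served:  true,
                Storage: true,
                Schema:  &apiextv1.CustomResourceValidation{OpenAPIV3Schema: schema},
            }},
        },
    }
    b, _ := json.MarshalIndent(crd, "", "  ")
    fmt.Println(string(b))
}
------------------------------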
• [SLOW TEST:8.135 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":9,"skipped":213,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:11.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b95f7cea-4e04-467c-bac9-a0d5c0683c8e STEP: Creating the pod Nov 5 23:27:11.475: INFO: The status of Pod pod-projected-configmaps-668ae3dc-abf1-472d-a15a-10f5a41559e5 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:13.478: INFO: The status of Pod pod-projected-configmaps-668ae3dc-abf1-472d-a15a-10f5a41559e5 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:15.478: INFO: The status of Pod pod-projected-configmaps-668ae3dc-abf1-472d-a15a-10f5a41559e5 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-b95f7cea-4e04-467c-bac9-a0d5c0683c8e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:18.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5347" for this suite. 
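------------------------------
The "waiting to observe update in volume" step works because projected ConfigMap keys are resynced by the kubelet on its sync loop, so an update to the ConfigMap eventually appears in the mounted file without restarting the pod. A sketch of the volume stanza the test shape implies; the names are illustrative (the real fixture appends a UUID):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
                        Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
                    },
                }},
            },
        },
    }
    // The kubelet refreshes the projected keys periodically, so edits to the
    // ConfigMap propagate into the mounted files of running pods.
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}
------------------------------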
• [SLOW TEST:7.137 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":94,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:18.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:18.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1627" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":8,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:17.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:27:17.408: INFO: Waiting up to 5m0s for pod "busybox-user-65534-42c13c36-6603-4bb7-ace2-c4db81258b04" in namespace "security-context-test-2644" to be "Succeeded or Failed" Nov 5 23:27:17.414: INFO: Pod "busybox-user-65534-42c13c36-6603-4bb7-ace2-c4db81258b04": Phase="Pending", Reason="", readiness=false. Elapsed: 5.294279ms Nov 5 23:27:19.417: INFO: Pod "busybox-user-65534-42c13c36-6603-4bb7-ace2-c4db81258b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008332549s Nov 5 23:27:21.420: INFO: Pod "busybox-user-65534-42c13c36-6603-4bb7-ace2-c4db81258b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011924325s Nov 5 23:27:21.420: INFO: Pod "busybox-user-65534-42c13c36-6603-4bb7-ace2-c4db81258b04" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:21.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2644" for this suite. 
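------------------------------
securityContext.runAsUser pins the UID the container's first process runs as; 65534 is the conventional "nobody" user, and the test checks the process reports exactly that UID. A minimal sketch, with illustrative names:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(65534) // "nobody"
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:            "busybox",
                Image:           "busybox:1.29",
                Command:         []string{"sh", "-c", "id -u"}, // prints 65534
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------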
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:27.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 5 23:26:27.754: INFO: PodSpec: initContainers in spec.initContainers Nov 5 23:27:21.484: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-70f9ceb2-89d4-4b23-b477-d5d16b3786e9", GenerateName:"", Namespace:"init-container-1098", SelfLink:"", UID:"a55d5916-87c2-4f5a-995d-ba2a595e2517", ResourceVersion:"46044", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771751587, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"754301505"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.238\"\n ],\n \"mac\": \"12:ec:e5:2c:d1:4e\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.238\"\n ],\n \"mac\": \"12:ec:e5:2c:d1:4e\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00475aac8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00475aae0)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00475aaf8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00475ab10)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00475ab28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00475ab40)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-2sgj5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002ee5a40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2sgj5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2sgj5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2sgj5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004b08b78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003b06310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b08c00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b08c20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004b08c28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004b08c2c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00327a410), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751587, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751587, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751587, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751587, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.238", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.238"}}, StartTime:(*v1.Time)(0xc00475ab70), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003b063f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003b06460)}, Ready:false, 
RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://c3268d4fa7a573924d96bb1f1695b69df6bed475001b60b7e79c4c7a539834af", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ee5ac0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ee5aa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004b08caf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:21.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1098" for this suite. • [SLOW TEST:53.762 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":18,"skipped":269,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:18.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 5 23:27:21.730: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:21.739: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3218" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Nov 5 23:26:35.326: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Nov 5 23:26:53.574: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:27:02.143: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:22.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7624" for this suite. • [SLOW TEST:47.248 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:21.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-3bfcf3b8-94c6-4256-a61c-c657c4528335 STEP: Creating a pod to test consume secrets Nov 5 23:27:21.536: INFO: Waiting up to 5m0s for pod "pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488" in namespace "secrets-2303" to be "Succeeded or Failed" Nov 5 23:27:21.538: INFO: Pod "pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089357ms Nov 5 23:27:23.540: INFO: Pod "pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004563544s Nov 5 23:27:25.544: INFO: Pod "pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008051367s Nov 5 23:27:27.548: INFO: Pod "pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012560628s STEP: Saw pod success Nov 5 23:27:27.548: INFO: Pod "pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488" satisfied condition "Succeeded or Failed" Nov 5 23:27:27.551: INFO: Trying to get logs from node node1 pod pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488 container secret-volume-test: STEP: delete the pod Nov 5 23:27:27.569: INFO: Waiting for pod pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488 to disappear Nov 5 23:27:27.571: INFO: Pod pod-secrets-a4b9dbf3-fe55-40eb-87df-6135d93ac488 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:27.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2303" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":270,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:05.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Nov 5 23:27:05.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Nov 5 23:27:05.628: INFO: stderr: "" Nov 5 23:27:05.628: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Nov 5 23:27:05.628: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Nov 5 23:27:05.628: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2394" to be "running and ready, or succeeded" Nov 5 23:27:05.630: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152257ms Nov 5 23:27:07.634: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006015123s Nov 5 23:27:09.637: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008943428s Nov 5 23:27:11.641: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true.
Elapsed: 6.012601098s Nov 5 23:27:11.641: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Nov 5 23:27:11.641: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Nov 5 23:27:11.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 logs logs-generator logs-generator' Nov 5 23:27:11.785: INFO: stderr: "" Nov 5 23:27:11.785: INFO: stdout: "I1105 23:27:09.603618 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/cqp 537\nI1105 23:27:09.803676 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/s4gh 528\nI1105 23:27:10.004512 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/m22h 516\nI1105 23:27:10.203754 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/8rc 528\nI1105 23:27:10.404244 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/4gb 568\nI1105 23:27:10.604547 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/bpz7 390\nI1105 23:27:10.803817 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/kqnk 540\nI1105 23:27:11.004155 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/w8r 570\nI1105 23:27:11.204521 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vkwb 491\nI1105 23:27:11.403787 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/5kks 263\nI1105 23:27:11.604074 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/vkx7 494\n" STEP: limiting log lines Nov 5 23:27:11.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 logs logs-generator logs-generator --tail=1' Nov 5 23:27:11.940: INFO: stderr: "" Nov 5 23:27:11.940: INFO: stdout: "I1105 23:27:11.804369 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/n5d 361\n" Nov 5 23:27:11.940: INFO: got output "I1105 23:27:11.804369 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/n5d 361\n" STEP: limiting log bytes Nov 5 23:27:11.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 logs logs-generator logs-generator --limit-bytes=1' Nov 5 23:27:12.080: INFO: stderr: "" Nov 5 23:27:12.080: INFO: stdout: "I" Nov 5 23:27:12.080: INFO: got output "I" STEP: exposing timestamps Nov 5 23:27:12.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 logs logs-generator logs-generator --tail=1 --timestamps' Nov 5 23:27:12.253: INFO: stderr: "" Nov 5 23:27:12.253: INFO: stdout: "2021-11-05T23:27:12.209663597Z I1105 23:27:12.203679 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/4f4n 368\n" Nov 5 23:27:12.253: INFO: got output "2021-11-05T23:27:12.209663597Z I1105 23:27:12.203679 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/4f4n 368\n" STEP: restricting to a time range Nov 5 23:27:14.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 logs logs-generator logs-generator --since=1s' Nov 5 23:27:14.929: INFO: stderr: "" Nov 5 23:27:14.929: INFO: stdout: "I1105 23:27:14.004358 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/ckps 295\nI1105 23:27:14.203721 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/pztd 550\nI1105 23:27:14.404093 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/2f2 510\nI1105 23:27:14.604608 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/8m86 582\nI1105
23:27:14.803926 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/ttlx 590\n" Nov 5 23:27:14.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 logs logs-generator logs-generator --since=24h' Nov 5 23:27:15.090: INFO: stderr: "" Nov 5 23:27:15.090: INFO: stdout: "I1105 23:27:09.603618 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/cqp 537\nI1105 23:27:09.803676 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/s4gh 528\nI1105 23:27:10.004512 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/m22h 516\nI1105 23:27:10.203754 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/8rc 528\nI1105 23:27:10.404244 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/4gb 568\nI1105 23:27:10.604547 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/bpz7 390\nI1105 23:27:10.803817 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/kqnk 540\nI1105 23:27:11.004155 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/w8r 570\nI1105 23:27:11.204521 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vkwb 491\nI1105 23:27:11.403787 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/5kks 263\nI1105 23:27:11.604074 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/vkx7 494\nI1105 23:27:11.804369 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/n5d 361\nI1105 23:27:12.004579 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/72qv 526\nI1105 23:27:12.203679 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/4f4n 368\nI1105 23:27:12.404059 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/pc9w 308\nI1105 23:27:12.604196 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/kqf 507\nI1105 23:27:12.804516 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/8fmd 571\nI1105 23:27:13.003759 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/qccq 336\nI1105 23:27:13.204180 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/6xz 250\nI1105 23:27:13.404463 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/975g 259\nI1105 23:27:13.603803 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/vtx 545\nI1105 23:27:13.804169 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/2bp 353\nI1105 23:27:14.004358 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/ckps 295\nI1105 23:27:14.203721 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/pztd 550\nI1105 23:27:14.404093 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/2f2 510\nI1105 23:27:14.604608 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/8m86 582\nI1105 23:27:14.803926 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/ttlx 590\nI1105 23:27:15.004215 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/nlf 225\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Nov 5 23:27:15.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2394 delete pod logs-generator' Nov 5 23:27:28.949: INFO: stderr: "" Nov 5 23:27:28.949: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:28.949: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2394" for this suite. • [SLOW TEST:23.514 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":19,"skipped":368,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":125,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:21.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:27:21.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 create -f -' Nov 5 23:27:22.152: INFO: stderr: "" Nov 5 23:27:22.152: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Nov 5 23:27:22.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 create -f -' Nov 5 23:27:22.465: INFO: stderr: "" Nov 5 23:27:22.465: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 5 23:27:23.469: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:23.469: INFO: Found 0 / 1 Nov 5 23:27:24.469: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:24.469: INFO: Found 0 / 1 Nov 5 23:27:25.469: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:25.469: INFO: Found 0 / 1 Nov 5 23:27:26.470: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:26.470: INFO: Found 0 / 1 Nov 5 23:27:27.469: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:27.469: INFO: Found 0 / 1 Nov 5 23:27:28.469: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:28.469: INFO: Found 1 / 1 Nov 5 23:27:28.469: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 5 23:27:28.474: INFO: Selector matched 1 pods for map[app:agnhost] Nov 5 23:27:28.474: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
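A quick aside on the log-filtering flags the Kubectl logs test above just exercised; the following sketch reproduces the same checks by hand, assuming the logs-generator pod and the kubectl-2394 namespace from the log are still present (all flags are standard kubectl):
kubectl -n kubectl-2394 logs logs-generator logs-generator                        # full log (second name is the container)
kubectl -n kubectl-2394 logs logs-generator logs-generator --tail=1               # only the last line
kubectl -n kubectl-2394 logs logs-generator logs-generator --limit-bytes=1        # only the first byte
kubectl -n kubectl-2394 logs logs-generator logs-generator --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl -n kubectl-2394 logs logs-generator logs-generator --since=1s             # only entries from the last second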
Nov 5 23:27:28.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 describe pod agnhost-primary-l2mx2' Nov 5 23:27:28.671: INFO: stderr: "" Nov 5 23:27:28.671: INFO: stdout: "Name: agnhost-primary-l2mx2\nNamespace: kubectl-7849\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 05 Nov 2021 23:27:22 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.252\"\n ],\n \"mac\": \"92:ac:0d:57:77:8f\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.252\"\n ],\n \"mac\": \"92:ac:0d:57:77:8f\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.252\nIPs:\n IP: 10.244.3.252\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://f1c4c7ad9ac051348aeed55b96e1c5eab1627f2a128de4cd0938b20329358c5b\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 05 Nov 2021 23:27:26 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g66fl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-g66fl:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-7849/agnhost-primary-l2mx2 to node1\n Normal Pulling 3s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 3s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 419.004461ms\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Nov 5 23:27:28.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 describe rc agnhost-primary' Nov 5 23:27:28.869: INFO: stderr: "" Nov 5 23:27:28.869: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7849\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-primary-l2mx2\n" Nov 5 23:27:28.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 describe service agnhost-primary' Nov 5 23:27:29.044: INFO: stderr: "" Nov 5
23:27:29.044: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7849\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.47.84\nIPs: 10.233.47.84\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.252:6379\nSession Affinity: None\nEvents: <none>\n" Nov 5 23:27:29.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 describe node master1' Nov 5 23:27:29.267: INFO: stderr: "" Nov 5 23:27:29.267: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 05 Nov 2021 20:58:52 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: <unset>\n RenewTime: Fri, 05 Nov 2021 23:27:28 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 05 Nov 2021 21:04:29 +0000 Fri, 05 Nov 2021 21:04:29 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 05 Nov 2021 23:27:29 +0000 Fri, 05 Nov 2021 20:58:50 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 05 Nov 2021 23:27:29 +0000 Fri, 05 Nov 2021 20:58:50 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 05 Nov 2021 23:27:29 +0000 Fri, 05 Nov 2021 20:58:50 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 05 Nov 2021 23:27:29 +0000 Fri, 05 Nov 2021 21:01:42 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518328Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629496Ki\n pods: 110\nSystem Info:\n Machine ID: b66bbe4d404942179ce344aa1da0c494\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: b59c0f0e-9c14-460c-acfa-6e83037bd04e\n Kernel Version: 3.10.0-1160.45.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.10\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-dwrs5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system coredns-8474476ff8-nq2jw 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 145m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 138m\n
kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 147m\n kube-system kube-flannel-hkkhj 150m (0%) 300m (0%) 64M (0%) 500M (0%) 145m\n kube-system kube-multus-ds-amd64-rr699 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 145m\n kube-system kube-proxy-r4cf7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 146m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 129m\n monitoring node-exporter-lgdzv 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 132m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n" Nov 5 23:27:29.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7849 describe namespace kubectl-7849' Nov 5 23:27:29.446: INFO: stderr: "" Nov 5 23:27:29.446: INFO: stdout: "Name: kubectl-7849\nLabels: e2e-framework=kubectl\n e2e-run=19b48af2-1912-4f93-af9b-268709dc4d4b\n kubernetes.io/metadata.name=kubectl-7849\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:29.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7849" for this suite. • [SLOW TEST:7.706 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":10,"skipped":125,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":22,"skipped":334,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:22.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:27:22.585: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 5 23:27:27.588: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Nov 5 23:27:29.600: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Nov 5 23:27:29.605: INFO: observed ReplicaSet test-rs in namespace replicaset-2461 with ReadyReplicas 1, AvailableReplicas 1
Nov 5 23:27:29.615: INFO: observed ReplicaSet test-rs in namespace replicaset-2461 with ReadyReplicas 1, AvailableReplicas 1 Nov 5 23:27:29.623: INFO: observed ReplicaSet test-rs in namespace replicaset-2461 with ReadyReplicas 1, AvailableReplicas 1 Nov 5 23:27:29.626: INFO: observed ReplicaSet test-rs in namespace replicaset-2461 with ReadyReplicas 1, AvailableReplicas 1 Nov 5 23:27:31.893: INFO: observed ReplicaSet test-rs in namespace replicaset-2461 with ReadyReplicas 2, AvailableReplicas 2 Nov 5 23:27:33.729: INFO: observed ReplicaSet test-rs in namespace replicaset-2461 with ReadyReplicas 3, found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:33.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2461" for this suite. • [SLOW TEST:11.182 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":23,"skipped":334,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:33.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-621 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-621 to expose endpoints map[] Nov 5 23:27:33.824: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Nov 5 23:27:34.830: INFO: successfully validated that service multi-endpoint-test in namespace services-621 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-621 Nov 5 23:27:34.963: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:36.968: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:38.966: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-621 to expose endpoints map[pod1:[100]] Nov 5 23:27:38.982: INFO: successfully validated that service multi-endpoint-test in namespace services-621 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-621 Nov 5 23:27:38.997: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:41.003: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:43.000: INFO: The
status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-621 to expose endpoints map[pod1:[100] pod2:[101]] Nov 5 23:27:43.012: INFO: successfully validated that service multi-endpoint-test in namespace services-621 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-621 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-621 to expose endpoints map[pod2:[101]] Nov 5 23:27:44.030: INFO: successfully validated that service multi-endpoint-test in namespace services-621 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-621 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-621 to expose endpoints map[] Nov 5 23:27:44.043: INFO: successfully validated that service multi-endpoint-test in namespace services-621 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:44.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-621" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:10.266 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":24,"skipped":365,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:46.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1105 23:26:47.684153 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:27:49.701: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7681" for this suite.
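The garbage-collector test above deletes a Deployment and then waits for its ReplicaSet and Pods to be collected. A minimal sketch of the same cascading-delete behavior from the CLI, using a hypothetical Deployment named example; --cascade is a standard kubectl flag (string values since v1.20):
kubectl create deployment example --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl delete deployment example --cascade=background   # the default: the garbage collector removes dependents afterwards
kubectl get rs,pods                                      # the ReplicaSet and Pods vanish once the GC catches up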
• [SLOW TEST:63.085 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":31,"skipped":599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:27.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-x6cp STEP: Creating a pod to test atomic-volume-subpath Nov 5 23:27:27.652: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x6cp" in namespace "subpath-4472" to be "Succeeded or Failed" Nov 5 23:27:27.656: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.655326ms Nov 5 23:27:29.660: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007356576s Nov 5 23:27:31.663: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 4.011085609s Nov 5 23:27:33.668: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 6.015962515s Nov 5 23:27:35.671: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 8.018822669s Nov 5 23:27:37.676: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 10.023436576s Nov 5 23:27:39.680: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 12.027198848s Nov 5 23:27:41.683: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 14.031020376s Nov 5 23:27:43.687: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 16.034429053s Nov 5 23:27:45.692: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 18.039301985s Nov 5 23:27:47.696: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 20.043108172s Nov 5 23:27:49.699: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Running", Reason="", readiness=true. Elapsed: 22.046500708s Nov 5 23:27:51.703: INFO: Pod "pod-subpath-test-configmap-x6cp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.050235569s STEP: Saw pod success Nov 5 23:27:51.703: INFO: Pod "pod-subpath-test-configmap-x6cp" satisfied condition "Succeeded or Failed" Nov 5 23:27:51.705: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-x6cp container test-container-subpath-configmap-x6cp: STEP: delete the pod Nov 5 23:27:51.718: INFO: Waiting for pod pod-subpath-test-configmap-x6cp to disappear Nov 5 23:27:51.720: INFO: Pod pod-subpath-test-configmap-x6cp no longer exists STEP: Deleting pod pod-subpath-test-configmap-x6cp Nov 5 23:27:51.720: INFO: Deleting pod "pod-subpath-test-configmap-x6cp" in namespace "subpath-4472" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:51.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4472" for this suite. • [SLOW TEST:24.125 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:51.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-2f7378f1-d6ac-4fa9-92ac-5fa65b34c1e5 STEP: Creating a pod to test consume configMaps Nov 5 23:27:51.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb" in namespace "configmap-5868" to be "Succeeded or Failed" Nov 5 23:27:51.817: INFO: Pod "pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.587047ms Nov 5 23:27:53.821: INFO: Pod "pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006766448s Nov 5 23:27:55.826: INFO: Pod "pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011552138s STEP: Saw pod success Nov 5 23:27:55.826: INFO: Pod "pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb" satisfied condition "Succeeded or Failed" Nov 5 23:27:55.829: INFO: Trying to get logs from node node2 pod pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb container agnhost-container: STEP: delete the pod Nov 5 23:27:55.841: INFO: Waiting for pod pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb to disappear Nov 5 23:27:55.844: INFO: Pod pod-configmaps-5bdafcc1-1af0-4d1f-955e-cd8e2c8b17fb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:55.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5868" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":306,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:55.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-719ba263-f643-41e8-bb7e-e39943a0b2b6 STEP: Creating a pod to test consume secrets Nov 5 23:27:55.921: INFO: Waiting up to 5m0s for pod "pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c" in namespace "secrets-6126" to be "Succeeded or Failed" Nov 5 23:27:55.925: INFO: Pod "pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363519ms Nov 5 23:27:57.929: INFO: Pod "pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008278117s Nov 5 23:27:59.932: INFO: Pod "pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011748089s STEP: Saw pod success Nov 5 23:27:59.932: INFO: Pod "pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c" satisfied condition "Succeeded or Failed" Nov 5 23:27:59.936: INFO: Trying to get logs from node node2 pod pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c container secret-volume-test: STEP: delete the pod Nov 5 23:27:59.948: INFO: Waiting for pod pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c to disappear Nov 5 23:27:59.951: INFO: Pod pod-secrets-307d5e81-93e3-4a17-b762-54578e41368c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:27:59.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6126" for this suite. 
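The Secrets volume tests above create a Secret, mount it into a pod, and expect the pod to read the content and exit zero. A minimal sketch with hypothetical names (my-secret, secret-reader); the volume stanza is standard core/v1, and the image matches the busybox image used elsewhere in this run:
kubectl create secret generic my-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["cat", "/etc/secret-volume/data-1"]   # succeeds only if the Secret is mounted
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
EOF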
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":321,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:32.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1105 23:26:32.152962 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:00.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2449" for this suite. • [SLOW TEST:88.053 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":20,"skipped":419,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:00.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:00.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-631" for this suite. 
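The CronJob test above verifies concurrencyPolicy: Replace, i.e. a job still running when the next schedule fires is killed and replaced by a new one. A minimal sketch of such a CronJob with a hypothetical name; the fields are standard batch/v1:
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: replace-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: sleeper
            image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
            command: ["sleep", "300"]   # outlives the one-minute schedule, forcing a replacement
EOF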
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":21,"skipped":421,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:00.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:00.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5549" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":22,"skipped":438,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:59.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Nov 5 23:28:00.003: INFO: Waiting up to 5m0s for pod "pod-38baf8b2-e40b-457a-9b85-03c3a54bb091" in namespace "emptydir-5448" to be "Succeeded or Failed" Nov 5 23:28:00.007: INFO: Pod "pod-38baf8b2-e40b-457a-9b85-03c3a54bb091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.975518ms Nov 5 23:28:02.009: INFO: Pod "pod-38baf8b2-e40b-457a-9b85-03c3a54bb091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005103859s Nov 5 23:28:04.012: INFO: Pod "pod-38baf8b2-e40b-457a-9b85-03c3a54bb091": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008327112s STEP: Saw pod success Nov 5 23:28:04.012: INFO: Pod "pod-38baf8b2-e40b-457a-9b85-03c3a54bb091" satisfied condition "Succeeded or Failed" Nov 5 23:28:04.015: INFO: Trying to get logs from node node1 pod pod-38baf8b2-e40b-457a-9b85-03c3a54bb091 container test-container: STEP: delete the pod Nov 5 23:28:04.027: INFO: Waiting for pod pod-38baf8b2-e40b-457a-9b85-03c3a54bb091 to disappear Nov 5 23:28:04.030: INFO: Pod pod-38baf8b2-e40b-457a-9b85-03c3a54bb091 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:04.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5448" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:00.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Nov 5 23:28:00.402: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:02.406: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:04.406: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Nov 5 23:28:05.421: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:06.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7916" for this suite. 
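The ReplicaSet adoption test above hinges on label selectors as the ownership mechanism: a bare pod whose labels match a ReplicaSet's selector is adopted, and relabeling it makes the controller release it and start a replacement. A sketch of the release step, using the pod name from the log and a hypothetical new label value:
kubectl label pod pod-adoption-release name=released --overwrite   # no longer matches the selector; the RS releases the pod
kubectl get pods -l name=pod-adoption-release                      # shows the freshly created replacement pod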
• [SLOW TEST:6.076 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":23,"skipped":462,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:49.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2688 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2688 STEP: creating replication controller externalsvc in namespace services-2688 I1105 23:27:49.805504 28 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2688, replica count: 2 I1105 23:27:52.857338 28 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Nov 5 23:27:52.872: INFO: Creating new exec pod Nov 5 23:27:56.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2688 exec execpod28mjz -- /bin/sh -x -c nslookup nodeport-service.services-2688.svc.cluster.local' Nov 5 23:27:57.150: INFO: stderr: "+ nslookup nodeport-service.services-2688.svc.cluster.local\n" Nov 5 23:27:57.150: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-2688.svc.cluster.local\tcanonical name = externalsvc.services-2688.svc.cluster.local.\nName:\texternalsvc.services-2688.svc.cluster.local\nAddress: 10.233.50.227\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2688, will wait for the garbage collector to delete the pods Nov 5 23:27:57.207: INFO: Deleting ReplicationController externalsvc took: 3.682771ms Nov 5 23:27:57.309: INFO: Terminating ReplicationController externalsvc pods took: 101.223318ms Nov 5 23:28:08.818: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:08.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2688" for this suite. 
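The test above flips a Service from NodePort to ExternalName and then confirms, with nslookup from an in-cluster pod, that the service name resolves as a CNAME to the target FQDN. The verification command, as it appears in the log:
kubectl -n services-2688 exec execpod28mjz -- nslookup nodeport-service.services-2688.svc.cluster.local
# expected: canonical name = externalsvc.services-2688.svc.cluster.local.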
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:27:49.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2688
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2688
STEP: creating replication controller externalsvc in namespace services-2688
I1105 23:27:49.805504 28 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2688, replica count: 2
I1105 23:27:52.857338 28 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Nov 5 23:27:52.872: INFO: Creating new exec pod
Nov 5 23:27:56.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2688 exec execpod28mjz -- /bin/sh -x -c nslookup nodeport-service.services-2688.svc.cluster.local'
Nov 5 23:27:57.150: INFO: stderr: "+ nslookup nodeport-service.services-2688.svc.cluster.local\n"
Nov 5 23:27:57.150: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-2688.svc.cluster.local\tcanonical name = externalsvc.services-2688.svc.cluster.local.\nName:\texternalsvc.services-2688.svc.cluster.local\nAddress: 10.233.50.227\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2688, will wait for the garbage collector to delete the pods
Nov 5 23:27:57.207: INFO: Deleting ReplicationController externalsvc took: 3.682771ms
Nov 5 23:27:57.309: INFO: Terminating ReplicationController externalsvc pods took: 101.223318ms
Nov 5 23:28:08.818: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:28:08.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2688" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:19.066 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":32,"skipped":629,"failed":0}
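The type change the spec exercises is a single update to the service spec: the NodePort service keeps its name (so the DNS record checked by the nslookup above keeps resolving) while its type is rewritten to ExternalName. A rough sketch under the same client assumptions as before, not the e2e helper itself:

// Sketch: rewrite an existing NodePort service to type=ExternalName.
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func toExternalName(ctx context.Context, cs kubernetes.Interface, ns, name, target string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = v1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // e.g. "externalsvc.services-2688.svc.cluster.local"
	svc.Spec.ClusterIP = ""        // ExternalName services carry no cluster IP
	svc.Spec.Ports = nil           // and need no ports/nodePorts
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}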
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:28:04.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7795bc4c-46a8-421c-8432-ea52859001fd
STEP: Creating a pod to test consume secrets
Nov 5 23:28:04.114: INFO: Waiting up to 5m0s for pod "pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad" in namespace "secrets-6227" to be "Succeeded or Failed"
Nov 5 23:28:04.120: INFO: Pod "pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365919ms
Nov 5 23:28:06.124: INFO: Pod "pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009854233s
Nov 5 23:28:08.127: INFO: Pod "pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013509413s
Nov 5 23:28:10.131: INFO: Pod "pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016746155s
STEP: Saw pod success
Nov 5 23:28:10.131: INFO: Pod "pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad" satisfied condition "Succeeded or Failed"
Nov 5 23:28:10.133: INFO: Trying to get logs from node node1 pod pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad container secret-volume-test:
STEP: delete the pod
Nov 5 23:28:10.194: INFO: Waiting for pod pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad to disappear
Nov 5 23:28:10.196: INFO: Pod pod-secrets-fd8ec6be-141a-43cd-a524-be06078075ad no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:28:10.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6227" for this suite.
• [SLOW TEST:6.126 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":346,"failed":0}
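The defaultMode assertion in the Secrets spec comes down to one optional field on the volume source: the secret is projected with an explicit file mode and the test container stats the resulting file. Illustrative wiring only, with a made-up mode value:

// Sketch: mount a secret volume with an explicit defaultMode.
package sketch

import (
	v1 "k8s.io/api/core/v1"
)

func secretVolume(secretName string) v1.Volume {
	mode := int32(0400) // hypothetical mode; applies to every key projected into the volume
	return v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}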
SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:28:10.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:28:10.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4004 version'
Nov 5 23:28:10.353: INFO: stderr: ""
Nov 5 23:28:10.353: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.5\", GitCommit:\"aea7bbadd2fc0cd689de94a54e5b7b758869d691\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:10:45Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:28:10.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4004" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":25,"skipped":351,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:27:21.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1105 23:27:22.534787 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 5 23:28:24.552: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:28:24.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8742" for this suite.
• [SLOW TEST:63.096 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":11,"skipped":232,"failed":0}
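The garbage-collector spec above hinges on DeleteOptions.PropagationPolicy: deleting the Deployment with Orphan propagation must leave its ReplicaSet behind, and the test then watches for a while to make sure the GC does not delete it anyway (most of the 63s runtime is that wait). The delete call itself, as a sketch under the same client assumptions:

// Sketch: delete a Deployment but orphan its dependents, as the GC test does.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteOrphaning(ctx context.Context, cs kubernetes.Interface, ns, deployment string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.AppsV1().Deployments(ns).Delete(ctx, deployment, metav1.DeleteOptions{
		// The GC strips ownerReferences from the ReplicaSet instead of cascading.
		PropagationPolicy: &orphan,
	})
}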
\"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.192\"\n ],\n \"mac\": \"62:79:43:40:97:f4\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.192\"\n ],\n \"mac\": \"62:79:43:40:97:f4\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.192\nIPs:\n IP: 10.244.3.192\nContainers:\n webserver:\n Container ID: docker://9567e1d31b5e33f75c1e57e5757e7b462d5035818774308941792df66962ccb2\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 05 Nov 2021 23:23:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g7cbw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-g7cbw:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m4s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m3s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 630.909862ms\n Normal Created 5m3s kubelet Created container webserver\n Normal Started 5m3s kubelet Started container webserver\n" Nov 5 23:28:23.552: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-7258 Priority: 0 Node: node1/10.10.190.207 Start Time: Fri, 05 Nov 2021 23:23:17 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.192" ], "mac": "62:79:43:40:97:f4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.192" ], "mac": "62:79:43:40:97:f4", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.3.192 IPs: IP: 10.244.3.192 Containers: webserver: Container ID: docker://9567e1d31b5e33f75c1e57e5757e7b462d5035818774308941792df66962ccb2 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 05 Nov 2021 23:23:20 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g7cbw (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-g7cbw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type 
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Nov 5 23:28:23.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7258 describe po test-pod'
Nov 5 23:28:23.552: INFO: stderr: ""
Nov 5 23:28:23.552: INFO: stdout: "Name: test-pod\nNamespace: statefulset-7258\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 05 Nov 2021 23:23:17 +0000\nLabels: <none>\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.192\"\n ],\n \"mac\": \"62:79:43:40:97:f4\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.192\"\n ],\n \"mac\": \"62:79:43:40:97:f4\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.192\nIPs:\n IP: 10.244.3.192\nContainers:\n webserver:\n Container ID: docker://9567e1d31b5e33f75c1e57e5757e7b462d5035818774308941792df66962ccb2\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 05 Nov 2021 23:23:20 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g7cbw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-g7cbw:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m4s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m3s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 630.909862ms\n Normal Created 5m3s kubelet Created container webserver\n Normal Started 5m3s kubelet Started container webserver\n"
Nov 5 23:28:23.552: INFO: Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-7258
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Fri, 05 Nov 2021 23:23:17 +0000
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.192"
                    ],
                    "mac": "62:79:43:40:97:f4",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.192"
                    ],
                    "mac": "62:79:43:40:97:f4",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: privileged
Status:       Running
IP:           10.244.3.192
IPs:
  IP:  10.244.3.192
Containers:
  webserver:
    Container ID:   docker://9567e1d31b5e33f75c1e57e5757e7b462d5035818774308941792df66962ccb2
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Fri, 05 Nov 2021 23:23:20 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g7cbw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-g7cbw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  5m4s  kubelet  Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
  Normal  Pulled   5m3s  kubelet  Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 630.909862ms
  Normal  Created  5m3s  kubelet  Created container webserver
  Normal  Started  5m3s  kubelet  Started container webserver
Nov 5 23:28:23.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7258 logs test-pod --tail=100'
Nov 5 23:28:23.713: INFO: stderr: ""
Nov 5 23:28:23.713: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.192. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.192. Set the 'ServerName' directive globally to suppress this message\n[Fri Nov 05 23:23:20.925196 2021] [mpm_event:notice] [pid 1:tid 140274029837160] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Nov 05 23:23:20.925230 2021] [core:notice] [pid 1:tid 140274029837160] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Nov 5 23:28:23.713: INFO: Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.192. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.192. Set the 'ServerName' directive globally to suppress this message
[Fri Nov 05 23:23:20.925196 2021] [mpm_event:notice] [pid 1:tid 140274029837160] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Fri Nov 05 23:23:20.925230 2021] [core:notice] [pid 1:tid 140274029837160] AH00094: Command line: 'httpd -D FOREGROUND'
Nov 5 23:28:23.713: INFO: Deleting all statefulset in ns statefulset-7258
Nov 5 23:28:23.716: INFO: Scaling statefulset ss to 0
Nov 5 23:28:23.723: INFO: Waiting for statefulset status.replicas updated to 0
Nov 5 23:28:23.726: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "statefulset-7258".
STEP: Found 7 events.
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:17 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]]
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:17 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []]
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:19 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]]
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:19 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:20 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 630.909862ms
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:20 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver
Nov 5 23:28:23.736: INFO: At 2021-11-05 23:23:20 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver
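The FailedCreate events above explain the timeout: every PodSecurityPolicy usable by the StatefulSet's service account rejects hostPort 21017 (the allowed ranges are [9100], [9103-9104], and []), so ss-0 is never admitted and there is nothing to recreate. For comparison, a policy that would admit it needs the port inside spec.hostPorts; this is an illustrative object, not one from this cluster:

// Sketch: a PodSecurityPolicy whose hostPorts range admits port 21017.
package sketch

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func allowHostPort21017() *policyv1beta1.PodSecurityPolicy {
	return &policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-statefulset-hostport"}, // hypothetical name
		Spec: policyv1beta1.PodSecurityPolicySpec{
			// Admitting ss-0 requires 21017 to fall inside an allowed range.
			HostPorts: []policyv1beta1.HostPortRange{{Min: 21017, Max: 21017}},
			// Mandatory strategy fields, all left permissive for the sketch.
			SELinux:            policyv1beta1.SELinuxStrategyOptions{Rule: policyv1beta1.SELinuxStrategyRunAsAny},
			RunAsUser:          policyv1beta1.RunAsUserStrategyOptions{Rule: policyv1beta1.RunAsUserStrategyRunAsAny},
			SupplementalGroups: policyv1beta1.SupplementalGroupsStrategyOptions{Rule: policyv1beta1.SupplementalGroupsStrategyRunAsAny},
			FSGroup:            policyv1beta1.FSGroupStrategyOptions{Rule: policyv1beta1.FSGroupStrategyRunAsAny},
		},
	}
}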
Nov 5 23:28:23.738: INFO: POD       NODE   PHASE    GRACE  CONDITIONS
Nov 5 23:28:23.738: INFO: test-pod  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:23:17 +0000 UTC  }]
Nov 5 23:28:23.738: INFO:
Nov 5 23:28:23.743: INFO: Logging node info for node master1
Nov 5 23:28:23.744: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 47166 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:19 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:19 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:19 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:28:19 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:28:23.745: INFO: Logging kubelet events for node master1 Nov 5 23:28:23.748: INFO: Logging pods the kubelet 
thinks is on node master1
Nov 5 23:28:23.773: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container kube-scheduler ready: true, restart count 0
Nov 5 23:28:23.773: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Init container install-cni ready: true, restart count 2
Nov 5 23:28:23.773: INFO: Container kube-flannel ready: true, restart count 2
Nov 5 23:28:23.773: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container coredns ready: true, restart count 2
Nov 5 23:28:23.773: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 5 23:28:23.773: INFO: Container node-exporter ready: true, restart count 0
Nov 5 23:28:23.773: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container kube-apiserver ready: true, restart count 0
Nov 5 23:28:23.773: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container kube-controller-manager ready: true, restart count 3
Nov 5 23:28:23.773: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container docker-registry ready: true, restart count 0
Nov 5 23:28:23.773: INFO: Container nginx ready: true, restart count 0
Nov 5 23:28:23.773: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container kube-proxy ready: true, restart count 1
Nov 5 23:28:23.773: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:28:23.773: INFO: Container kube-multus ready: true, restart count 1
W1105 23:28:23.788914 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 5 23:28:23.857: INFO: Latency metrics for node master1 Nov 5 23:28:23.857: INFO: Logging node info for node master2 Nov 5 23:28:23.860: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 47176 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:21 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:21 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:21 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:28:21 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:28:23.860: INFO: Logging kubelet events for node master2 Nov 5 23:28:23.862: INFO: Logging pods the kubelet thinks is on node master2 Nov 5 23:28:23.876: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:28:23.876: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:28:23.876: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:28:23.876: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Container nfd-controller ready: true, restart count 0 Nov 5 23:28:23.876: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:23.876: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:28:23.876: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:28:23.876: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:28:23.876: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:28:23.876: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:28:23.876: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:28:23.876: INFO: Container kube-flannel ready: true, restart count 3 W1105 23:28:23.893934 
30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:28:23.954: INFO: Latency metrics for node master2 Nov 5 23:28:23.954: INFO: Logging node info for node master3 Nov 5 23:28:23.956: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 47141 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:16 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:16 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:16 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:28:16 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:28:23.956: INFO: Logging kubelet events for node master3 Nov 5 23:28:23.959: INFO: Logging pods the kubelet thinks is on node master3 Nov 5 23:28:23.974: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:28:23.974: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:28:23.974: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:28:23.974: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container autoscaler ready: true, restart count 1 Nov 5 23:28:23.974: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:28:23.974: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:28:23.974: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:28:23.974: INFO: Container kube-flannel ready: true, restart count 1 Nov 5 23:28:23.974: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:23.974: INFO: Container coredns ready: true, restart count 1 Nov 5 23:28:23.974: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:23.974: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:28:23.974: INFO: Container node-exporter ready: true, restart count 0 W1105 23:28:23.992569 30 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:28:24.072: INFO: Latency metrics for node master3 Nov 5 23:28:24.072: INFO: Logging node info for node node1 Nov 5 23:28:24.077: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 47138 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:15 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:15 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:15 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:28:15 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:28:24.078: INFO: Logging kubelet events for node node1 Nov 5 23:28:24.081: INFO: Logging pods the kubelet thinks is on node node1 Nov 5 23:28:24.097: INFO: affinity-nodeport-timeout-pz5p8 started at 2021-11-05 23:27:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 5 23:28:24.097: INFO: test-pod started at 2021-11-05 23:23:17 +0000 UTC (0+1 container 
statuses recorded) Nov 5 23:28:24.097: INFO: Container webserver ready: true, restart count 0 Nov 5 23:28:24.097: INFO: test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab started at 2021-11-05 23:27:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container test-webserver ready: true, restart count 0 Nov 5 23:28:24.097: INFO: ss2-2 started at 2021-11-05 23:28:13 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container webserver ready: true, restart count 0 Nov 5 23:28:24.097: INFO: execpod-affinitywtzlg started at 2021-11-05 23:28:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:28:24.097: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:28:24.097: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:28:24.097: INFO: affinity-nodeport-spkgv started at 2021-11-05 23:26:46 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container affinity-nodeport ready: true, restart count 0 Nov 5 23:28:24.097: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:28:24.097: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded) Nov 5 23:28:24.097: INFO: Container discover ready: false, restart count 0 Nov 5 23:28:24.097: INFO: Container init ready: false, restart count 0 Nov 5 23:28:24.097: INFO: Container install ready: false, restart count 0 Nov 5 23:28:24.097: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:28:24.097: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:24.097: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:28:24.097: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:28:24.097: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:28:24.097: INFO: Container collectd ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:28:24.097: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:28:24.097: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:28:24.097: INFO: affinity-nodeport-transition-wrj2s started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:28:24.097: INFO: pod-configmaps-956bdc66-7726-4f03-8824-164451762428 
started at 2021-11-05 23:27:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:28:24.097: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:28:24.097: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:28:24.097: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 5 23:28:24.097: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container grafana ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:28:24.097: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.097: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:28:24.097: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:24.097: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:28:24.097: INFO: Container reconcile ready: true, restart count 0 W1105 23:28:24.110913 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:28:24.300: INFO: Latency metrics for node node1 Nov 5 23:28:24.300: INFO: Logging node info for node node2 Nov 5 23:28:24.303: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 47106 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:14 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:14 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:28:14 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:28:14 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:28:24.304: INFO: Logging kubelet events for node node2 Nov 5 23:28:24.306: INFO: Logging pods the kubelet thinks is on node node2 Nov 5 23:28:24.322: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:28:24.322: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:28:24.322: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:28:24.322: INFO: Container collectd ready: true, restart count 0 Nov 5 23:28:24.322: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:28:24.322: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:28:24.322: INFO: ss2-1 started at 
2021-11-05 23:28:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container webserver ready: true, restart count 0 Nov 5 23:28:24.322: INFO: ss2-0 started at 2021-11-05 23:28:06 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container webserver ready: true, restart count 0 Nov 5 23:28:24.322: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:24.322: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:28:24.322: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:28:24.322: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:24.322: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:28:24.322: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:28:24.322: INFO: affinity-nodeport-timeout-vjfr2 started at 2021-11-05 23:27:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 5 23:28:24.322: INFO: simpletest.deployment-9858f564d-cw6cq started at 2021-11-05 23:27:21 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container nginx ready: true, restart count 0 Nov 5 23:28:24.322: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:28:24.322: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:28:24.322: INFO: execpod-affinityxz4jp started at 2021-11-05 23:27:16 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:28:24.322: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 5 23:28:24.322: INFO: Container discover ready: false, restart count 0 Nov 5 23:28:24.322: INFO: Container init ready: false, restart count 0 Nov 5 23:28:24.322: INFO: Container install ready: false, restart count 0 Nov 5 23:28:24.322: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:28:24.322: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:28:24.322: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:28:24.322: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:28:24.322: INFO: affinity-nodeport-transition-dbbbd started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:28:24.322: INFO: affinity-nodeport-timeout-qsgkj started at 2021-11-05 23:27:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 5 23:28:24.322: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Init container install-cni ready: true, restart count 1 Nov 5 23:28:24.322: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 
23:28:24.322: INFO: affinity-nodeport-transition-c9bcs started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:28:24.322: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:28:24.322: INFO: simpletest.deployment-9858f564d-xghd6 started at 2021-11-05 23:27:21 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container nginx ready: true, restart count 0 Nov 5 23:28:24.322: INFO: affinity-nodeport-5rvwl started at 2021-11-05 23:26:46 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container affinity-nodeport ready: true, restart count 0 Nov 5 23:28:24.322: INFO: affinity-nodeport-fggqn started at 2021-11-05 23:26:46 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container affinity-nodeport ready: true, restart count 0 Nov 5 23:28:24.322: INFO: execpod-affinityhm8jb started at 2021-11-05 23:26:55 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:28:24.322: INFO: replace-27269247-ttmv7 started at 2021-11-05 23:27:00 +0000 UTC (0+1 container statuses recorded) Nov 5 23:28:24.322: INFO: Container c ready: true, restart count 0 W1105 23:28:24.342750 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:28:24.625: INFO: Latency metrics for node node2 Nov 5 23:28:24.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7258" for this suite. 
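------------------------------
The diagnostics above (node dumps and kubelet pod listings for node1 and node2) are what the framework collects when a spec fails; the failure itself is summarized just below: ss-0 was never re-created after eviction. As a rough client-go sketch of the invariant that spec enforces — delete the replica and require a same-name pod with a fresh UID to appear — assuming the kubeconfig path and the namespace/pod names shown in the log:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns, name := "statefulset-7258", "ss-0" // taken from the log above

	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	oldUID := pod.UID

	// Delete the replica (standing in for the eviction the e2e test
	// triggers); the StatefulSet controller should recreate "ss-0".
	if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Pass only if a pod with the same name but a new UID shows up
	// before the timeout -- exactly what the failed spec never saw.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // old pod gone, replacement not yet created
		}
		if err != nil {
			return false, err
		}
		return p.UID != oldUID, nil
	})
	if err != nil {
		panic(fmt.Errorf("pod %s was not re-created: %w", name, err))
	}
	fmt.Println("ss-0 re-created with a new UID")
}
------------------------------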
• Failure [307.312 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:28:23.371: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":13,"skipped":219,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:24.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Nov 5 23:28:24.652: INFO: The status of Pod pod-hostip-dddf01a2-0aa2-48c0-acea-9634ae5048f5 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:26.657: INFO: The status of Pod pod-hostip-dddf01a2-0aa2-48c0-acea-9634ae5048f5 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:28.655: INFO: The status of Pod pod-hostip-dddf01a2-0aa2-48c0-acea-9634ae5048f5 is Running (Ready = true) Nov 5 23:28:28.660: INFO: Pod pod-hostip-dddf01a2-0aa2-48c0-acea-9634ae5048f5 has hostIP: 10.10.190.208 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:28.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1072" for this suite. 
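------------------------------
The host-IP spec above needs only one assertion: a pod that has reached Running reports a non-empty status.hostIP (here 10.10.190.208, node2's InternalIP). A minimal sketch of the same read via client-go, reusing the kubeconfig path and the namespace/pod name from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace and pod name are copied from the log above.
	pod, err := cs.CoreV1().Pods("pods-1072").Get(context.Background(),
		"pod-hostip-dddf01a2-0aa2-48c0-acea-9634ae5048f5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Status.HostIP == "" {
		panic("hostIP not yet set; pod may still be scheduling")
	}
	fmt.Printf("pod %s has hostIP %s\n", pod.Name, pod.Status.HostIP)
}
------------------------------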
• ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:29.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Nov 5 23:27:29.502: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46275 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:27:29.503: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46275 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Nov 5 23:27:39.509: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46451 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:27:39.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46451 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Nov 5 23:27:49.517: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46539 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 
5 23:27:49.518: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46539 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Nov 5 23:27:59.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46707 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:27:59.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2535 5cdf9f53-ace9-4980-8547-bec5d42607c4 46707 0 2021-11-05 23:27:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-05 23:27:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Nov 5 23:28:09.528: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2535 78bac875-6e6c-460f-81f5-4cb073c9cd10 46970 0 2021-11-05 23:28:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-05 23:28:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:28:09.528: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2535 78bac875-6e6c-460f-81f5-4cb073c9cd10 46970 0 2021-11-05 23:28:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-05 23:28:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Nov 5 23:28:19.534: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2535 78bac875-6e6c-460f-81f5-4cb073c9cd10 47167 0 2021-11-05 23:28:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-05 23:28:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:28:19.534: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2535 78bac875-6e6c-460f-81f5-4cb073c9cd10 47167 0 2021-11-05 23:28:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-05 23:28:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:29.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2535" for this suite. • [SLOW TEST:60.073 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":11,"skipped":134,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:28.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Nov 5 23:28:28.753: INFO: The status of Pod pod-update-69ae98f0-2dc9-4b95-81f9-f8ffcd697458 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:30.756: INFO: The status of Pod pod-update-69ae98f0-2dc9-4b95-81f9-f8ffcd697458 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:32.757: INFO: The status of Pod pod-update-69ae98f0-2dc9-4b95-81f9-f8ffcd697458 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 5 23:28:33.272: INFO: Successfully updated pod "pod-update-69ae98f0-2dc9-4b95-81f9-f8ffcd697458" STEP: verifying the updated pod is in kubernetes Nov 5 23:28:33.276: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:33.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3708" for this suite. 
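------------------------------
"should be updated" pushes a change to a live pod through the API and re-reads it. One low-friction way to reproduce that round trip is a strategic-merge patch of the pod's labels, sketched below with client-go; the label key/value are illustrative rather than the test's own.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns, name := "pods-3708", "pod-update-69ae98f0-2dc9-4b95-81f9-f8ffcd697458"

	// Merge a new label into the live object instead of doing a full
	// GET/modify/UPDATE, which can fail on resourceVersion conflicts.
	patch := []byte(`{"metadata":{"labels":{"time":"updated"}}}`)
	pod, err := cs.CoreV1().Pods(ns).Patch(ctx, name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", pod.Labels)
}
------------------------------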
• ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":281,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:10.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Nov 5 23:28:10.393: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:35.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1003" for this suite. • [SLOW TEST:24.667 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":26,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:24.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 5 23:28:24.720: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:26.724: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:28.724: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 5 23:28:28.741: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:30.744: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:32.744: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:34.744: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 5 23:28:34.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 5 23:28:34.760: INFO: Pod pod-with-poststart-http-hook still exists Nov 5 23:28:36.762: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 5 23:28:36.764: INFO: Pod pod-with-poststart-http-hook still exists Nov 5 23:28:38.761: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 5 23:28:38.764: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:38.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5262" for this suite. 
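------------------------------
The lifecycle-hook spec runs a handler pod (pod-handle-http-request) first, then a pod whose postStart HTTP hook must reach that handler before its container counts as started. Below is a sketch of such a pod against the v1.21 Go API; the handler IP and the agnhost /echo path are assumptions, and note that k8s.io/api v0.21 still names the hook type corev1.Handler (later releases renamed it LifecycleHandler):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Lifecycle: &corev1.Lifecycle{
					// Kubelet blocks container startup until this GET succeeds.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.244.4.120", // handler pod IP: an assumption
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("container-lifecycle-hook-5262").
		Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------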
• [SLOW TEST:14.089 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":250,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:33.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:28:33.317: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Nov 5 23:28:33.334: INFO: The status of Pod pod-exec-websocket-5ce55813-b245-41f5-880a-3bc2f38f53b0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:35.338: INFO: The status of Pod pod-exec-websocket-5ce55813-b245-41f5-880a-3bc2f38f53b0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:37.337: INFO: The status of Pod pod-exec-websocket-5ce55813-b245-41f5-880a-3bc2f38f53b0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:39.338: INFO: The status of Pod pod-exec-websocket-5ce55813-b245-41f5-880a-3bc2f38f53b0 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:39.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9960" for this suite. 
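------------------------------
The websocket spec drives the pods/exec subresource over a websocket upgrade against the API server. The sketch below hits the same subresource with the stock client-go SPDY executor instead of a raw websocket — a deliberate substitution; the URL construction is identical — and runs an illustrative command:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build the URL for the exec subresource of the test pod from the log.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pods-9960").
		Name("pod-exec-websocket-5ce55813-b245-41f5-880a-3bc2f38f53b0").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution works"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
------------------------------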
• [SLOW TEST:6.184 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":286,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:39.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:43.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7395" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":15,"skipped":294,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:43.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:43.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2264" for this suite. 
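The immutable-Secret test above logs no intermediate steps, so for context, a minimal sketch of the object shape it exercises; once Immutable is set and stored, the API server rejects changes to the data, leaving delete-and-recreate as the only way to change it. Name and key below are made up.

package e2esketch // hypothetical helper package

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createImmutableSecret stores a Secret with the Immutable field set.
func createImmutableSecret(ctx context.Context, cs kubernetes.Interface, ns string) error {
	immutable := true
	s := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "immutable-demo"}, // illustrative name
		Immutable:  &immutable,
		StringData: map[string]string{"key": "value"}, // illustrative payload
	}
	_, err := cs.CoreV1().Secrets(ns).Create(ctx, s, metav1.CreateOptions{})
	return err
}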
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":16,"skipped":296,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:43.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:43.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4221" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":17,"skipped":307,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:43.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:28:43.742: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7" in namespace "downward-api-7968" to be "Succeeded or Failed" Nov 5 23:28:43.748: INFO: Pod "downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.764946ms Nov 5 23:28:45.752: INFO: Pod "downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009244225s Nov 5 23:28:47.756: INFO: Pod "downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013601319s STEP: Saw pod success Nov 5 23:28:47.756: INFO: Pod "downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7" satisfied condition "Succeeded or Failed" Nov 5 23:28:47.759: INFO: Trying to get logs from node node1 pod downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7 container client-container: STEP: delete the pod Nov 5 23:28:47.781: INFO: Waiting for pod downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7 to disappear Nov 5 23:28:47.783: INFO: Pod downwardapi-volume-a90f3c19-65d5-4c40-999b-4490f6f313f7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:47.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7968" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":315,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:47.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:47.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5880" for this suite. 
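A sketch of the create / list / patch / fetch / delete sequence the events test steps through, against the core v1 events API; the event fields here are illustrative, assuming an existing clientset:

package e2esketch // hypothetical helper package

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// eventLifecycle walks the same CRUD sequence as the test.
func eventLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "e2e-demo-event"}, // illustrative
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "demo-pod"},
		Reason:         "Demo",
		Message:        "event lifecycle demo",
		Type:           corev1.EventTypeNormal,
	}
	if _, err := cs.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Empty namespace lists events across all namespaces.
	if _, err := cs.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"message":"event lifecycle demo - patched"}`)
	if _, err := cs.CoreV1().Events(ns).Patch(ctx, ev.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().Events(ns).Get(ctx, ev.Name, metav1.GetOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().Events(ns).Delete(ctx, ev.Name, metav1.DeleteOptions{})
}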
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":19,"skipped":323,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:38.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2971 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2971 I1105 23:28:38.823265 30 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2971, replica count: 2 I1105 23:28:41.875190 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:28:44.875335 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:28:44.875: INFO: Creating new exec pod Nov 5 23:28:49.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2971 exec execpodvt7fm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Nov 5 23:28:50.135: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Nov 5 23:28:50.135: INFO: stdout: "externalname-service-7bvjk" Nov 5 23:28:50.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2971 exec execpodvt7fm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.50.204 80' Nov 5 23:28:50.374: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.50.204 80\nConnection to 10.233.50.204 80 port [tcp/http] succeeded!\n" Nov 5 23:28:50.374: INFO: stdout: "externalname-service-7bvjk" Nov 5 23:28:50.374: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:50.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2971" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.613 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":15,"skipped":252,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:47.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 5 23:28:47.940: INFO: Waiting up to 5m0s for pod "pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13" in namespace "emptydir-3201" to be "Succeeded or Failed" Nov 5 23:28:47.943: INFO: Pod "pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.770059ms Nov 5 23:28:49.946: INFO: Pod "pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006813843s Nov 5 23:28:51.950: INFO: Pod "pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010359964s STEP: Saw pod success Nov 5 23:28:51.950: INFO: Pod "pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13" satisfied condition "Succeeded or Failed" Nov 5 23:28:51.953: INFO: Trying to get logs from node node1 pod pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13 container test-container: STEP: delete the pod Nov 5 23:28:52.078: INFO: Waiting for pod pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13 to disappear Nov 5 23:28:52.080: INFO: Pod pod-fca492f4-9f59-4232-b141-bd8d9a1d6d13 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:52.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3201" for this suite. 
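A sketch of the pod shape behind the (root,0644,tmpfs) case above: a memory-medium emptyDir probed by a shell one-liner. The suite uses its own mounttest image; busybox and the command here are stand-ins.

package e2esketch // hypothetical helper package

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod mounts a memory-backed emptyDir and checks, as root,
// that a file created there carries mode 0644 and the mount is tmpfs.
func emptyDirTmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}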
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":337,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:52.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:28:52.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581" in namespace "projected-743" to be "Succeeded or Failed" Nov 5 23:28:52.144: INFO: Pod "downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581": Phase="Pending", Reason="", readiness=false. Elapsed: 4.635211ms Nov 5 23:28:54.149: INFO: Pod "downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009796459s Nov 5 23:28:56.155: INFO: Pod "downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015381757s STEP: Saw pod success Nov 5 23:28:56.155: INFO: Pod "downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581" satisfied condition "Succeeded or Failed" Nov 5 23:28:56.157: INFO: Trying to get logs from node node2 pod downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581 container client-container: STEP: delete the pod Nov 5 23:28:56.169: INFO: Waiting for pod downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581 to disappear Nov 5 23:28:56.173: INFO: Pod downwardapi-volume-c6dc47a9-f709-4e57-8f35-87b2ad77d581 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:56.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-743" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":343,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:50.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-3bafe382-8bb6-434f-b8a5-2263417cb23c STEP: Creating configMap with name cm-test-opt-upd-7ce8b12b-24cc-4e9a-836d-6d33dd7bf103 STEP: Creating the pod Nov 5 23:28:50.462: INFO: The status of Pod pod-configmaps-126dc07f-7cfc-474d-a3cd-13a124096bac is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:52.465: INFO: The status of Pod pod-configmaps-126dc07f-7cfc-474d-a3cd-13a124096bac is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:28:54.466: INFO: The status of Pod pod-configmaps-126dc07f-7cfc-474d-a3cd-13a124096bac is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-3bafe382-8bb6-434f-b8a5-2263417cb23c STEP: Updating configmap cm-test-opt-upd-7ce8b12b-24cc-4e9a-836d-6d33dd7bf103 STEP: Creating configMap with name cm-test-opt-create-83fdef54-8c29-40b2-9315-ecf2484c4b48 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:28:58.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4731" for this suite. 
• [SLOW TEST:8.289 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":260,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:56.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:28:56.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6" in namespace "downward-api-1986" to be "Succeeded or Failed" Nov 5 23:28:56.232: INFO: Pod "downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.113077ms Nov 5 23:28:58.235: INFO: Pod "downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010307195s Nov 5 23:29:00.239: INFO: Pod "downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01373372s STEP: Saw pod success Nov 5 23:29:00.239: INFO: Pod "downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6" satisfied condition "Succeeded or Failed" Nov 5 23:29:00.241: INFO: Trying to get logs from node node2 pod downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6 container client-container: STEP: delete the pod Nov 5 23:29:00.256: INFO: Waiting for pod downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6 to disappear Nov 5 23:29:00.258: INFO: Pod downwardapi-volume-042bba22-4f06-4675-9b49-7d2e0a70e8e6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:00.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1986" for this suite. 
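The memory-request test above reads a downward API file populated from a resourceFieldRef. A sketch of that volume item; the divisor (1Mi, so the value is reported in MiB) is an illustrative choice:

package e2esketch // hypothetical helper package

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryRequestFile surfaces a container's memory request as a file.
func memoryRequestFile(containerName string) corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "memory_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: containerName,
			Resource:      "requests.memory",
			Divisor:       resource.MustParse("1Mi"),
		},
	}
}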
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":345,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:44.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-48e65b53-9ccb-4824-b5b5-b8c555028f3c STEP: Creating the pod Nov 5 23:27:44.142: INFO: The status of Pod pod-configmaps-956bdc66-7726-4f03-8824-164451762428 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:46.146: INFO: The status of Pod pod-configmaps-956bdc66-7726-4f03-8824-164451762428 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:48.147: INFO: The status of Pod pod-configmaps-956bdc66-7726-4f03-8824-164451762428 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-48e65b53-9ccb-4824-b5b5-b8c555028f3c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:01.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2427" for this suite. • [SLOW TEST:77.203 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":367,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:58.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 5 23:28:58.748: INFO: Waiting up to 5m0s for pod "pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d" in namespace "emptydir-9383" to be "Succeeded or Failed" Nov 5 23:28:58.754: INFO: Pod "pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120104ms Nov 5 23:29:00.758: INFO: Pod "pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009373987s Nov 5 23:29:02.762: INFO: Pod "pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013465248s STEP: Saw pod success Nov 5 23:29:02.762: INFO: Pod "pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d" satisfied condition "Succeeded or Failed" Nov 5 23:29:02.764: INFO: Trying to get logs from node node1 pod pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d container test-container: STEP: delete the pod Nov 5 23:29:02.780: INFO: Waiting for pod pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d to disappear Nov 5 23:29:02.781: INFO: Pod pod-acbdbfd7-4675-4f2d-83c1-8b354147bd3d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:02.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9383" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":267,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:02.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:29:02.852: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306" in namespace "projected-2344" to be "Succeeded or Failed" Nov 5 23:29:02.855: INFO: Pod "downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309589ms Nov 5 23:29:04.857: INFO: Pod "downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005750168s Nov 5 23:29:06.862: INFO: Pod "downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010642321s STEP: Saw pod success Nov 5 23:29:06.862: INFO: Pod "downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306" satisfied condition "Succeeded or Failed" Nov 5 23:29:06.864: INFO: Trying to get logs from node node2 pod downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306 container client-container: STEP: delete the pod Nov 5 23:29:06.970: INFO: Waiting for pod downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306 to disappear Nov 5 23:29:06.972: INFO: Pod downwardapi-volume-0165a536-39db-4621-93b9-0a7dc8169306 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:06.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2344" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":281,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:01.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:29:01.860: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:29:03.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751741, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751741, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751741, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751741, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:29:06.877: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
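Registration at this step amounts to creating a MutatingWebhookConfiguration pointed at the webhook service. A hedged sketch: the service name e2e-test-webhook appears in the log above, but the webhook name, serving path, and rule scope here are assumptions, and the CA bundle would come from the cert generated during setup.

package e2esketch // hypothetical helper package

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerMutatingConfigMapWebhook registers a mutating webhook scoped
// to ConfigMap creation.
func registerMutatingConfigMapWebhook(ctx context.Context, cs kubernetes.Interface, ns string, caBundle []byte) error {
	path := "/mutating-configmaps" // hypothetical serving path
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-configmaps-demo"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-configmaps.example.com", // illustrative
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: ns,
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // cert generated during test setup
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}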
STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:07.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7088" for this suite. STEP: Destroying namespace "webhook-7088-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.660 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":26,"skipped":369,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:26:46.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-3510 STEP: creating service affinity-nodeport in namespace services-3510 STEP: creating replication controller affinity-nodeport in namespace services-3510 I1105 23:26:46.632775 26 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3510, replica count: 3 I1105 23:26:49.684275 26 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:26:52.684517 26 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:26:55.685684 26 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:26:55.695: INFO: Creating new exec pod Nov 5 23:27:02.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Nov 5 23:27:02.969: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Nov 5 23:27:02.969: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:27:02.969: INFO:
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.12.44 80' Nov 5 23:27:03.288: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.12.44 80\nConnection to 10.233.12.44 80 port [tcp/http] succeeded!\n" Nov 5 23:27:03.288: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:27:03.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:03.778: INFO: rc: 1 Nov 5 23:27:03.778: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31054
nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[39 further identical attempts elided: the same nc probe against 10.10.190.207 port 31054 was retried roughly once per second from Nov 5 23:27:04.779 through Nov 5 23:27:43.050, each returning rc: 1 with "nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused" and logging "Retrying...".]
Nov 5 23:27:43.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:44.027: INFO: rc: 1 Nov 5 23:27:44.027: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:44.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:45.040: INFO: rc: 1 Nov 5 23:27:45.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:45.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:46.030: INFO: rc: 1 Nov 5 23:27:46.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:46.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:47.060: INFO: rc: 1 Nov 5 23:27:47.060: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:27:47.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:48.018: INFO: rc: 1 Nov 5 23:27:48.018: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:48.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:49.043: INFO: rc: 1 Nov 5 23:27:49.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:49.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:50.007: INFO: rc: 1 Nov 5 23:27:50.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:50.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:51.306: INFO: rc: 1 Nov 5 23:27:51.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:27:51.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:52.053: INFO: rc: 1 Nov 5 23:27:52.053: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:52.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:53.309: INFO: rc: 1 Nov 5 23:27:53.309: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:53.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:54.036: INFO: rc: 1 Nov 5 23:27:54.036: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:54.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:55.156: INFO: rc: 1 Nov 5 23:27:55.156: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:27:55.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:56.017: INFO: rc: 1 Nov 5 23:27:56.017: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:56.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:57.346: INFO: rc: 1 Nov 5 23:27:57.346: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31054 + echo hostName nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:57.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:58.022: INFO: rc: 1 Nov 5 23:27:58.022: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:58.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:27:59.303: INFO: rc: 1 Nov 5 23:27:59.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:27:59.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:00.061: INFO: rc: 1 Nov 5 23:28:00.061: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:00.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:01.010: INFO: rc: 1 Nov 5 23:28:01.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:01.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:02.019: INFO: rc: 1 Nov 5 23:28:02.019: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:02.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:03.084: INFO: rc: 1 Nov 5 23:28:03.084: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:03.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:04.029: INFO: rc: 1 Nov 5 23:28:04.029: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:04.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:05.030: INFO: rc: 1 Nov 5 23:28:05.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:05.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:06.050: INFO: rc: 1 Nov 5 23:28:06.050: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:06.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:07.245: INFO: rc: 1 Nov 5 23:28:07.245: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:07.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:08.267: INFO: rc: 1 Nov 5 23:28:08.267: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:08.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:09.081: INFO: rc: 1 Nov 5 23:28:09.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:09.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:10.276: INFO: rc: 1 Nov 5 23:28:10.276: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:10.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:11.272: INFO: rc: 1 Nov 5 23:28:11.272: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:11.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:12.476: INFO: rc: 1 Nov 5 23:28:12.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:12.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:13.031: INFO: rc: 1 Nov 5 23:28:13.031: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:13.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:14.039: INFO: rc: 1 Nov 5 23:28:14.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:14.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:15.022: INFO: rc: 1 Nov 5 23:28:15.023: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:15.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:16.041: INFO: rc: 1 Nov 5 23:28:16.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:16.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:17.015: INFO: rc: 1 Nov 5 23:28:17.015: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:17.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:18.039: INFO: rc: 1 Nov 5 23:28:18.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:18.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:19.019: INFO: rc: 1 Nov 5 23:28:19.019: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:19.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:20.066: INFO: rc: 1 Nov 5 23:28:20.066: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31054 + echo hostName nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:20.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:21.057: INFO: rc: 1 Nov 5 23:28:21.057: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:21.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:21.993: INFO: rc: 1 Nov 5 23:28:21.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:22.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:23.011: INFO: rc: 1 Nov 5 23:28:23.011: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:23.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:23.994: INFO: rc: 1 Nov 5 23:28:23.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:24.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:25.068: INFO: rc: 1 Nov 5 23:28:25.068: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:25.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:26.278: INFO: rc: 1 Nov 5 23:28:26.278: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:26.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:27.063: INFO: rc: 1 Nov 5 23:28:27.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:27.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:28.043: INFO: rc: 1 Nov 5 23:28:28.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:28.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:29.032: INFO: rc: 1 Nov 5 23:28:29.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:29.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:30.012: INFO: rc: 1 Nov 5 23:28:30.012: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:30.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:31.303: INFO: rc: 1 Nov 5 23:28:31.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:31.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:32.137: INFO: rc: 1 Nov 5 23:28:32.137: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:32.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:33.034: INFO: rc: 1 Nov 5 23:28:33.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:33.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:34.286: INFO: rc: 1 Nov 5 23:28:34.286: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:34.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:35.156: INFO: rc: 1 Nov 5 23:28:35.156: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:35.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:36.025: INFO: rc: 1 Nov 5 23:28:36.026: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:36.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:37.327: INFO: rc: 1 Nov 5 23:28:37.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:37.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:38.008: INFO: rc: 1 Nov 5 23:28:38.008: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:38.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:39.047: INFO: rc: 1 Nov 5 23:28:39.047: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:39.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:40.033: INFO: rc: 1 Nov 5 23:28:40.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:40.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:41.007: INFO: rc: 1 Nov 5 23:28:41.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:41.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:42.165: INFO: rc: 1 Nov 5 23:28:42.165: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:42.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:43.035: INFO: rc: 1 Nov 5 23:28:43.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31054 + echo hostName nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:43.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:44.002: INFO: rc: 1 Nov 5 23:28:44.002: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:44.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:45.011: INFO: rc: 1 Nov 5 23:28:45.011: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31054 + echo hostName nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:45.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:46.238: INFO: rc: 1 Nov 5 23:28:46.238: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:46.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054' Nov 5 23:28:47.539: INFO: rc: 1 Nov 5 23:28:47.539: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31054 nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:47.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054'
Nov 5 23:28:48.013: INFO: rc: 1
Nov 5 23:28:48.013: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31054
nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
[15 further identical attempts, roughly one per second from 23:28:48.779 through 23:29:02.779, are elided here; every one returned rc 1 with "nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused"]
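Each "Running ..." entry above is one reachability probe: the framework shells out to kubectl, execs /bin/sh in the dedicated client pod, and pipes a fixed payload through nc so the command's exit code reflects whether a TCP connection to the node IP and NodePort could be opened (-w 2 caps each connect attempt at two seconds). A minimal self-contained sketch of the same probe-and-retry loop, assuming kubectl is on PATH and using the pod, namespace, and endpoint from the log; this is not the framework's own helper, just an illustration of its observable behavior:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probe mirrors the logged command: exec a shell in the client pod and
// let nc try a TCP connect to the NodePort; exit code 0 means reachable.
func probe() error {
	sh := "echo hostName | nc -v -t -w 2 10.10.190.207 31054"
	return exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace=services-3510", "exec", "execpod-affinityhm8jb",
		"--", "/bin/sh", "-x", "-c", sh).Run()
}

func main() {
	// The log shows one attempt roughly every second until a 2m0s
	// deadline, which is the timeout quoted in the FAIL below.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			fmt.Println("service reachable")
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(1 * time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout")
}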
Nov 5 23:29:03.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054'
Nov 5 23:29:04.059: INFO: rc: 1
Nov 5 23:29:04.059: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31054
nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Nov 5 23:29:04.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054'
Nov 5 23:29:04.649: INFO: rc: 1
Nov 5 23:29:04.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3510 exec execpod-affinityhm8jb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31054:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31054
nc: connect to 10.10.190.207 port 31054 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Nov 5 23:29:04.649: FAIL: Unexpected error:
    <*errors.errorString | 0xc002075c30>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31054 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31054 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000aa2f20, 0x779f8f8, 0xc00ad342c0, 0xc007122f00, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00123f500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00123f500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00123f500, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Nov 5 23:29:04.651: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-3510, will wait for the garbage collector to delete the pods
Nov 5 23:29:04.724: INFO: Deleting ReplicationController affinity-nodeport took: 3.759624ms
Nov 5 23:29:04.824: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.14288ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3510".
STEP: Found 27 events.
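The stack trace names execAffinityTestForNonLBService, the session-affinity path for a non-LoadBalancer (here NodePort) Service: the test stands up the affinity-nodeport ReplicationController with a NodePort Service in front of it, then checks that consecutive requests from the exec pod stick to one backend. The probes never connected, so the affinity assertion itself was never reached; "Connection refused" at the node IP suggests the nodePort was not (or not yet) programmed on node1. A rough sketch of what such a Service looks like using the client-go types; the name, namespace, and nodePort 31054 come from the log, while the selector label and service port are illustrative assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := v1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "affinity-nodeport", // matches the RC/pod names in the events below
			Namespace: "services-3510",
		},
		Spec: v1.ServiceSpec{
			Type: v1.ServiceTypeNodePort,
			// ClientIP affinity is what the affinity assertion exercises.
			SessionAffinity: v1.ServiceAffinityClientIP,
			Selector:        map[string]string{"name": "affinity-nodeport"}, // assumed label
			Ports: []v1.ServicePort{{
				Port:     80,    // assumed service port
				NodePort: 31054, // the port every nc probe above dialed
			}},
		},
	}
	fmt.Printf("%+v\n", svc.Spec)
}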
Nov 5 23:29:18.843: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-5rvwl: { } Scheduled: Successfully assigned services-3510/affinity-nodeport-5rvwl to node2
Nov 5 23:29:18.843: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-fggqn: { } Scheduled: Successfully assigned services-3510/affinity-nodeport-fggqn to node2
Nov 5 23:29:18.843: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-spkgv: { } Scheduled: Successfully assigned services-3510/affinity-nodeport-spkgv to node1
Nov 5 23:29:18.843: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityhm8jb: { } Scheduled: Successfully assigned services-3510/execpod-affinityhm8jb to node2
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:46 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-fggqn
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:46 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-5rvwl
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:46 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-spkgv
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:48 +0000 UTC - event for affinity-nodeport-spkgv: {kubelet node1} Created: Created container affinity-nodeport
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:48 +0000 UTC - event for affinity-nodeport-spkgv: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 498.287226ms
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:48 +0000 UTC - event for affinity-nodeport-spkgv: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:49 +0000 UTC - event for affinity-nodeport-spkgv: {kubelet node1} Started: Started container affinity-nodeport
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:50 +0000 UTC - event for affinity-nodeport-5rvwl: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:50 +0000 UTC - event for affinity-nodeport-fggqn: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:51 +0000 UTC - event for affinity-nodeport-5rvwl: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 955.431169ms
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:51 +0000 UTC - event for affinity-nodeport-5rvwl: {kubelet node2} Created: Created container affinity-nodeport
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:51 +0000 UTC - event for affinity-nodeport-fggqn: {kubelet node2} Created: Created container affinity-nodeport
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:51 +0000 UTC - event for affinity-nodeport-fggqn: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.047467984s
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:51 +0000 UTC - event for affinity-nodeport-fggqn: {kubelet node2} Started: Started container affinity-nodeport
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:52 +0000 UTC - event for affinity-nodeport-5rvwl: {kubelet node2} Started: Started container affinity-nodeport
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:58 +0000 UTC - event for execpod-affinityhm8jb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:59 +0000 UTC -
event for execpod-affinityhm8jb: {kubelet node2} Started: Started container agnhost-container Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:59 +0000 UTC - event for execpod-affinityhm8jb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 530.279458ms Nov 5 23:29:18.843: INFO: At 2021-11-05 23:26:59 +0000 UTC - event for execpod-affinityhm8jb: {kubelet node2} Created: Created container agnhost-container Nov 5 23:29:18.843: INFO: At 2021-11-05 23:29:04 +0000 UTC - event for affinity-nodeport-5rvwl: {kubelet node2} Killing: Stopping container affinity-nodeport Nov 5 23:29:18.843: INFO: At 2021-11-05 23:29:04 +0000 UTC - event for affinity-nodeport-fggqn: {kubelet node2} Killing: Stopping container affinity-nodeport Nov 5 23:29:18.843: INFO: At 2021-11-05 23:29:04 +0000 UTC - event for affinity-nodeport-spkgv: {kubelet node1} Killing: Stopping container affinity-nodeport Nov 5 23:29:18.843: INFO: At 2021-11-05 23:29:04 +0000 UTC - event for execpod-affinityhm8jb: {kubelet node2} Killing: Stopping container agnhost-container Nov 5 23:29:18.845: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:29:18.845: INFO: Nov 5 23:29:18.849: INFO: Logging node info for node master1 Nov 5 23:29:18.851: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 48177 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:09 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:09 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:09 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:09 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:18.852: INFO: Logging kubelet events for node master1 Nov 5 23:29:18.854: INFO: Logging pods the kubelet 
thinks is on node master1
Nov 5 23:29:18.877: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.877: INFO: Container kube-proxy ready: true, restart count 1
Nov 5 23:29:18.877: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.877: INFO: Container kube-multus ready: true, restart count 1
Nov 5 23:29:18.877: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded)
Nov 5 23:29:18.877: INFO: Container docker-registry ready: true, restart count 0
Nov 5 23:29:18.877: INFO: Container nginx ready: true, restart count 0
Nov 5 23:29:18.877: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.877: INFO: Container kube-apiserver ready: true, restart count 0
Nov 5 23:29:18.877: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.877: INFO: Container kube-controller-manager ready: true, restart count 3
Nov 5 23:29:18.877: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.877: INFO: Container kube-scheduler ready: true, restart count 0
Nov 5 23:29:18.878: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 5 23:29:18.878: INFO: Init container install-cni ready: true, restart count 2
Nov 5 23:29:18.878: INFO: Container kube-flannel ready: true, restart count 2
Nov 5 23:29:18.878: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.878: INFO: Container coredns ready: true, restart count 2
Nov 5 23:29:18.878: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 5 23:29:18.878: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 5 23:29:18.878: INFO: Container node-exporter ready: true, restart count 0
W1105 23:29:18.890779 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
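The namespace event dump above and these per-node pod listings are plain API reads against the same cluster. For anyone reproducing the triage by hand, a small client-go sketch that lists the events the way the AfterEach block does (the kubeconfig path is taken from the log; the output format is approximated, not the framework's exact helper):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the log shows.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List events in the test namespace, as the AfterEach dump does.
	evs, err := cs.CoreV1().Events("services-3510").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(evs.Items))
	for _, e := range evs.Items {
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}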
Nov 5 23:29:18.971: INFO: Latency metrics for node master1 Nov 5 23:29:18.971: INFO: Logging node info for node master2 Nov 5 23:29:18.974: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 48206 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:11 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:11 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:11 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:11 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 5 23:29:18.974: INFO: Logging kubelet events for node master2
Nov 5 23:29:18.977: INFO: Logging pods the kubelet thinks is on node master2
Nov 5 23:29:18.995: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 5 23:29:18.995: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container kube-proxy ready: true, restart count 1
Nov 5 23:29:18.995: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Init container install-cni ready: true, restart count 0
Nov 5 23:29:18.995: INFO: Container kube-flannel ready: true, restart count 3
Nov 5 23:29:18.995: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container kube-apiserver ready: true, restart count 0
Nov 5 23:29:18.995: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container kube-scheduler ready: true, restart count 3
Nov 5 23:29:18.995: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container kube-multus ready: true, restart count 1
Nov 5 23:29:18.995: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container nfd-controller ready: true, restart count 0
Nov 5 23:29:18.995: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 5 23:29:18.995: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 5 23:29:18.995: INFO: Container node-exporter ready: true, restart count 0
W1105 23:29:19.032269
26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:19.096: INFO: Latency metrics for node master2 Nov 5 23:29:19.096: INFO: Logging node info for node master3 Nov 5 23:29:19.099: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 48297 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 5 23:29:19.099: INFO: Logging kubelet events for node master3
Nov 5 23:29:19.101: INFO: Logging pods the kubelet thinks is on node master3
Nov 5 23:29:19.113: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container kube-scheduler ready: true, restart count 3
Nov 5 23:29:19.113: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container kube-proxy ready: true, restart count 2
Nov 5 23:29:19.113: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container kube-multus ready: true, restart count 1
Nov 5 23:29:19.113: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container autoscaler ready: true, restart count 1
Nov 5 23:29:19.113: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container kube-apiserver ready: true, restart count 0
Nov 5 23:29:19.113: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 5 23:29:19.113: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Init container install-cni ready: true, restart count 0
Nov 5 23:29:19.113: INFO: Container kube-flannel ready: true, restart count 1
Nov 5 23:29:19.113: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container coredns ready: true, restart count 1
Nov 5 23:29:19.113: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 5 23:29:19.113: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 5 23:29:19.113: INFO: Container node-exporter ready: true, restart count 0
W1105 23:29:19.129241 26
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:19.203: INFO: Latency metrics for node master3 Nov 5 23:29:19.203: INFO: Logging node info for node node1 Nov 5 23:29:19.206: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 48295 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:16 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:19.207: INFO: Logging kubelet events for node node1 Nov 5 23:29:19.208: INFO: Logging pods the kubelet thinks is on node node1 Nov 5 23:29:19.225: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:19.225: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:29:19.225: INFO: prometheus-k8s-0 started at 
2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 5 23:29:19.225: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container grafana ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:29:19.225: INFO: liveness-58ae240d-5759-470f-be6e-c54592abc01a started at 2021-11-05 23:28:35 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:29:19.225: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:29:19.225: INFO: affinity-nodeport-timeout-pz5p8 started at 2021-11-05 23:27:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 5 23:29:19.225: INFO: affinity-clusterip-transition-vg6mb started at 2021-11-05 23:29:07 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Nov 5 23:29:19.225: INFO: test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab started at 2021-11-05 23:27:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container test-webserver ready: true, restart count 0 Nov 5 23:29:19.225: INFO: execpod-affinitywtzlg started at 2021-11-05 23:28:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:29:19.225: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:29:19.225: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:29:19.225: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:19.225: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:29:19.225: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:19.225: INFO: Container discover ready: false, restart count 0 Nov 5 23:29:19.225: INFO: Container init ready: false, restart count 0 Nov 5 23:29:19.225: INFO: Container install ready: false, restart count 0 Nov 5 23:29:19.225: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:29:19.225: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:19.225: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:29:19.225: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:29:19.225: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 
+0000 UTC (0+3 container statuses recorded) Nov 5 23:29:19.225: INFO: Container collectd ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:29:19.225: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:29:19.225: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:29:19.225: INFO: affinity-nodeport-transition-wrj2s started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:29:19.225: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:29:19.225: INFO: var-expansion-6088554d-bb50-477f-92c8-dc7b74a322eb started at 2021-11-05 23:28:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container dapi-container ready: false, restart count 0 Nov 5 23:29:19.225: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.225: INFO: Container nfd-worker ready: true, restart count 0 W1105 23:29:19.242164 26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:19.451: INFO: Latency metrics for node node1 Nov 5 23:29:19.451: INFO: Logging node info for node node2 Nov 5 23:29:19.453: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 48280 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:14 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:14 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:14 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:14 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:19.454: INFO: Logging kubelet events for node node2 Nov 5 23:29:19.457: INFO: Logging pods the kubelet thinks is on node node2 Nov 5 23:29:19.478: INFO: test-webserver-90a45a64-d28b-4d0a-a780-0078e465d503 started at 2021-11-05 23:29:00 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container test-webserver ready: false, restart count 0 Nov 5 23:29:19.478: INFO: sample-webhook-deployment-78988fc6cd-grljp started at 2021-11-05 23:29:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container sample-webhook ready: true, restart count 0 Nov 5 23:29:19.478: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Init container install-cni ready: true, restart count 1 Nov 5 23:29:19.478: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:29:19.478: INFO: affinity-nodeport-transition-c9bcs started at 2021-11-05 23:28:08 
+0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:29:19.478: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:29:19.478: INFO: affinity-clusterip-transition-fcxgc started at 2021-11-05 23:29:07 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Nov 5 23:29:19.478: INFO: execpod-affinityjsdb8 started at 2021-11-05 23:29:16 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container agnhost-container ready: false, restart count 0 Nov 5 23:29:19.478: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:19.478: INFO: Container collectd ready: true, restart count 0 Nov 5 23:29:19.478: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:29:19.478: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:29:19.478: INFO: ss2-1 started at 2021-11-05 23:28:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container webserver ready: false, restart count 0 Nov 5 23:29:19.478: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:19.478: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:29:19.478: INFO: affinity-nodeport-timeout-vjfr2 started at 2021-11-05 23:27:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 5 23:29:19.478: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:19.478: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:29:19.478: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:29:19.478: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:19.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:19.478: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:29:19.478: INFO: affinity-clusterip-transition-4shcz started at 2021-11-05 23:29:07 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Nov 5 23:29:19.478: INFO: execpod-affinityxz4jp started at 2021-11-05 23:27:16 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:29:19.478: INFO: ss2-0 started at 2021-11-05 23:29:01 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container webserver ready: true, restart count 0 Nov 5 23:29:19.478: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:29:19.478: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container nfd-worker ready: true, restart count 
0 Nov 5 23:29:19.478: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:19.478: INFO: Container discover ready: false, restart count 0 Nov 5 23:29:19.478: INFO: Container init ready: false, restart count 0 Nov 5 23:29:19.478: INFO: Container install ready: false, restart count 0 Nov 5 23:29:19.478: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:19.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:19.478: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:29:19.478: INFO: affinity-nodeport-timeout-qsgkj started at 2021-11-05 23:27:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 5 23:29:19.478: INFO: ss2-2 started at 2021-11-05 23:29:04 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container webserver ready: true, restart count 0 Nov 5 23:29:19.478: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:29:19.478: INFO: affinity-nodeport-transition-dbbbd started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:19.478: INFO: Container affinity-nodeport-transition ready: true, restart count 0 W1105 23:29:19.492057 26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:19.852: INFO: Latency metrics for node node2 Nov 5 23:29:19.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3510" for this suite. 
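------------------------------
Note: the dumps above are the e2e framework's per-node diagnostics. For each node it logs the full Node object (labels, annotations, managed fields, capacity/allocatable, conditions, images) plus every pod the kubelet reports. Below is a minimal client-go sketch that reads the same condition and allocatable data; it assumes a reachable cluster and the kubeconfig path this suite logs, and is an illustration, not code from the framework.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: same kubeconfig the suite logs (">>> kubeConfig: /root/.kube/config").
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Allocatable is what schedulable pods can actually claim; the dump
		// above shows it next to raw Capacity (e.g. cpu 77 of 80 on node1).
		fmt.Printf("node %s: cpu=%s memory=%s\n", node.Name,
			node.Status.Allocatable.Cpu().String(),
			node.Status.Allocatable.Memory().String())
		for _, cond := range node.Status.Conditions {
			// Healthy nodes report Ready=True and the *Pressure conditions
			// False, as both node1 and node2 do here.
			fmt.Printf("  %s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
		}
	}
}
------------------------------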
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [153.262 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:29:04.650: Unexpected error: <*errors.errorString | 0xc002075c30>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31054 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31054 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":451,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:07.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:29:08.416: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:29:10.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:29:12.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:29:15.435: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:29:15.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8922-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5618" for this suite. STEP: Destroying namespace "webhook-5618-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.607 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":27,"skipped":371,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:23.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 5 23:29:23.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7119 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 
--labels=run=e2e-test-httpd-pod' Nov 5 23:29:23.775: INFO: stderr: "" Nov 5 23:29:23.775: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Nov 5 23:29:23.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7119 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Nov 5 23:29:24.202: INFO: stderr: "" Nov 5 23:29:24.203: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 5 23:29:24.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7119 delete pods e2e-test-httpd-pod' Nov 5 23:29:26.095: INFO: stderr: "" Nov 5 23:29:26.095: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:26.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7119" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":28,"skipped":399,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:07.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-7555 STEP: creating service affinity-clusterip-transition in namespace services-7555 STEP: creating replication controller affinity-clusterip-transition in namespace services-7555 I1105 23:29:07.079678 30 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7555, replica count: 3 I1105 23:29:10.131971 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:29:13.132716 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:29:16.133585 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:29:16.137: INFO: Creating new exec pod Nov 5 23:29:21.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7555 exec execpod-affinityjsdb8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Nov 5 
23:29:21.443: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Nov 5 23:29:21.443: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:29:21.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7555 exec execpod-affinityjsdb8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.4.39 80' Nov 5 23:29:21.713: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.4.39 80\nConnection to 10.233.4.39 80 port [tcp/http] succeeded!\n" Nov 5 23:29:21.713: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:29:21.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7555 exec execpod-affinityjsdb8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.4.39:80/ ; done' Nov 5 23:29:22.047: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n" Nov 5 23:29:22.047: INFO: stdout: "\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-vg6mb\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-vg6mb\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-vg6mb\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-vg6mb\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-fcxgc\naffinity-clusterip-transition-4shcz" Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-vg6mb Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-vg6mb Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received 
response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-vg6mb Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-vg6mb Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-fcxgc Nov 5 23:29:22.047: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7555 exec execpod-affinityjsdb8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.4.39:80/ ; done' Nov 5 23:29:22.336: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.4.39:80/\n" Nov 5 23:29:22.336: INFO: stdout: "\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz\naffinity-clusterip-transition-4shcz" Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: 
affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Received response from host: affinity-clusterip-transition-4shcz Nov 5 23:29:22.336: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7555, will wait for the garbage collector to delete the pods Nov 5 23:29:22.398: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.319473ms Nov 5 23:29:22.500: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.186964ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:31.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7555" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:23.972 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":317,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:19.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:36.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-663" for this suite. 
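------------------------------
Note: the two curl loops above show the affinity switch taking effect: the first loop spreads requests across all three endpoints, while the second pins every request to affinity-clusterip-transition-4shcz. Below is a sketch of the kind of Service these tests toggle; the name, namespace, selector, and ports are illustrative, not the suite's own, and ClientIP is the stock corev1 session-affinity constant.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-demo"}, // illustrative name
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-demo"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080), // illustrative backend port
			}},
			// ClientIP pins each client to one endpoint; switching this
			// between None and ClientIP is what the test above exercises.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	if _, err := clientset.CoreV1().Services("default").Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------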
• [SLOW TEST:17.070 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":31,"skipped":462,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:26.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:29:26.131: INFO: Creating deployment "webserver-deployment" Nov 5 23:29:26.134: INFO: Waiting for observed generation 1 Nov 5 23:29:28.141: INFO: Waiting for all required pods to come up Nov 5 23:29:28.145: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Nov 5 23:29:38.152: INFO: Waiting for deployment "webserver-deployment" to complete Nov 5 23:29:38.156: INFO: Updating deployment "webserver-deployment" with a non-existent image Nov 5 23:29:38.164: INFO: Updating deployment webserver-deployment Nov 5 23:29:38.164: INFO: Waiting for observed generation 2 Nov 5 23:29:40.169: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Nov 5 23:29:40.171: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Nov 5 23:29:40.173: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Nov 5 23:29:40.181: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Nov 5 23:29:40.181: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Nov 5 23:29:40.183: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Nov 5 23:29:40.187: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Nov 5 23:29:40.187: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Nov 5 23:29:40.193: INFO: Updating deployment webserver-deployment Nov 5 23:29:40.193: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Nov 5 23:29:40.198: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Nov 5 23:29:40.200: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:29:40.206: INFO: Deployment "webserver-deployment": 
Nov 5 23:29:40.206: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-6532 fac53792-8563-4228-8bc0-092c5a91613e 48869 3 2021-11-05 23:29:26 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044a6948 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-11-05 23:29:38 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-11-05 23:29:40 +0000 UTC,LastTransitionTime:2021-11-05 23:29:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Nov 5 23:29:40.211: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6532 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 48867 3 2021-11-05 23:29:38 +0000 UTC
map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment fac53792-8563-4228-8bc0-092c5a91613e 0xc0034d64b7 0xc0034d64b8}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fac53792-8563-4228-8bc0-092c5a91613e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034d6538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:29:40.211: INFO: All old ReplicaSets of Deployment "webserver-deployment": Nov 5 23:29:40.211: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-6532 9171fbec-b15c-4bab-ac8e-fbc445d2f225 48865 3 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment fac53792-8563-4228-8bc0-092c5a91613e 0xc0034d6597 0xc0034d6598}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:29:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fac53792-8563-4228-8bc0-092c5a91613e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034d6608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:29:40.217: INFO: Pod "webserver-deployment-795d758f88-8qqn2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8qqn2 webserver-deployment-795d758f88- deployment-6532 255cd40b-95e2-4ceb-8089-5e47d886e2ff 48841 0 2021-11-05 23:29:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 0xc0044a6d1f 0xc0044a6d30}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8314b2-77b9-45b3-ad7c-1247d5a5ad80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mgx8z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgx8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.217: INFO: Pod "webserver-deployment-795d758f88-9dvw8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9dvw8 webserver-deployment-795d758f88- deployment-6532 3e2b2bea-966c-4118-b216-ce3e8d7564c1 48848 0 2021-11-05 23:29:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 0xc0044a6e9f 0xc0044a6eb0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8314b2-77b9-45b3-ad7c-1247d5a5ad80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8m79q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8m79q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-11-05 23:29:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.217: INFO: Pod "webserver-deployment-795d758f88-9gb4w" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9gb4w webserver-deployment-795d758f88- deployment-6532 4101ed3c-6013-4482-97b8-1f50c7403bd7 48877 0 2021-11-05 23:29:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 0xc0044a709f 0xc0044a70b0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8314b2-77b9-45b3-ad7c-1247d5a5ad80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4qzj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4qzj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.218: INFO: Pod "webserver-deployment-795d758f88-c2c2w" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c2c2w webserver-deployment-795d758f88- deployment-6532 859c7d70-7d7a-474c-a290-f6ff0800ade6 48861 0 2021-11-05 23:29:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 0xc0044a721f 0xc0044a7230}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8314b2-77b9-45b3-ad7c-1247d5a5ad80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-11-05 23:29:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nbcsn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nbcsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-11-05 23:29:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.218: INFO: Pod "webserver-deployment-795d758f88-h5zmt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h5zmt webserver-deployment-795d758f88- deployment-6532 febe4637-743c-4363-bda3-ced2d43b6d4e 48838 0 2021-11-05 23:29:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 0xc0044a73ff 0xc0044a7410}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8314b2-77b9-45b3-ad7c-1247d5a5ad80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jqgwb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jqgwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-11-05 23:29:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.218: INFO: Pod "webserver-deployment-795d758f88-pxph4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pxph4 webserver-deployment-795d758f88- deployment-6532 2d922089-38de-4f05-b716-f20ae3b21be5 48832 0 2021-11-05 23:29:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 dc8314b2-77b9-45b3-ad7c-1247d5a5ad80 0xc0044a75df 0xc0044a75f0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8314b2-77b9-45b3-ad7c-1247d5a5ad80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-11-05 23:29:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5dqqv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dqqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-11-05 23:29:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.218: INFO: Pod "webserver-deployment-847dcfb7fb-4hlfm" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4hlfm webserver-deployment-847dcfb7fb- deployment-6532 817c4ec5-f4e1-4fbf-ae9c-5dd6eabaecf5 48735 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.94" ], "mac": "ca:c3:45:40:e4:1f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.94" ], "mac": "ca:c3:45:40:e4:1f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc0044a77bf 0xc0044a77d0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jhthn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhthn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.94,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d7c362e4c13eacc2a7f72d6c978626dfe2b6a44497311184a15cbcc192718af3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.219: INFO: Pod "webserver-deployment-847dcfb7fb-ch925" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ch925 webserver-deployment-847dcfb7fb- deployment-6532 370d48c4-374e-44d0-81a4-4af738ddd0b8 48681 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.34" ], "mac": "92:00:54:f7:ae:48", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.34" ], "mac": "92:00:54:f7:ae:48", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc0044a7ecf 0xc0044a7ee0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tmss6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleratio
n{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.34,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://448ad8441fcfc3b7cc99004dddf5c5956677f73ae15c3962e3835cd5f5cebe07,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.219: INFO: Pod "webserver-deployment-847dcfb7fb-gsfvl" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gsfvl webserver-deployment-847dcfb7fb- deployment-6532 7b940ef2-2196-464e-a25d-24cf47bd48db 48690 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.36" ], "mac": "aa:6a:10:f5:33:41", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.36" ], "mac": "aa:6a:10:f5:33:41", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dba0cf 0xc004dba0e0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5cl7b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.36,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://96a2e9dcdb59ad3daa63dbf5fb1227f3916e5f9e97b4984c106c975752473006,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.219: INFO: Pod "webserver-deployment-847dcfb7fb-kfp8f" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kfp8f webserver-deployment-847dcfb7fb- deployment-6532 c6102d15-30d7-4989-997e-cd2505b5f406 48875 0 2021-11-05 23:29:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dba54f 0xc004dba560}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j9k6x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9k6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.220: INFO: Pod "webserver-deployment-847dcfb7fb-kqtt5" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kqtt5 webserver-deployment-847dcfb7fb- deployment-6532 74eb297f-bf74-4439-85f3-a3003c81ff38 48874 0 2021-11-05 23:29:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dba7df 0xc004dba7f0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k4qbq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-
access-k4qbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.220: INFO: Pod "webserver-deployment-847dcfb7fb-n766s" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n766s webserver-deployment-847dcfb7fb- deployment-6532 09e8cb78-bab9-4a1c-99ae-14e05d5115a5 48684 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.35" ], "mac": "16:2d:60:47:4b:73", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.35" ], "mac": "16:2d:60:47:4b:73", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dbb0cf 0xc004dbb1d0}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.35\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t4zhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4zhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.35,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e2337fcb5a96a0ab8feda6e7dea0277ed30f40aa4453b2bc447c74c38f7d0b58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.220: INFO: Pod "webserver-deployment-847dcfb7fb-nzjlr" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nzjlr webserver-deployment-847dcfb7fb- deployment-6532 7ad951da-9470-4835-b525-c9533f0263af 48622 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.33" ], "mac": "de:71:7a:1b:2e:ed", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.33" ], "mac": "de:71:7a:1b:2e:ed", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dbb41f 0xc004dbb430}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-72c5f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-72c5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.33,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://4f7532bd95d39e1cc2f4c491d5df8381b6a3c849e18af5920a318bbd892c520e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.221: INFO: Pod "webserver-deployment-847dcfb7fb-qgzx2" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qgzx2 webserver-deployment-847dcfb7fb- deployment-6532 b07d2f41-8ae9-4738-9d97-6fa4b59c0204 48687 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.37" ], "mac": "1a:c2:ab:95:b3:c5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.37" ], "mac": 
"1a:c2:ab:95:b3:c5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dbb61f 0xc004dbb630}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-27wn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27wn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.37,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://a2a8fde060bac38dea4eea1a303ced45d8ef86058e664fa28645c75176bba117,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.221: INFO: Pod "webserver-deployment-847dcfb7fb-qtvlh" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qtvlh webserver-deployment-847dcfb7fb- deployment-6532 d83a4186-71e5-4ec6-824c-614c6ff3c352 48713 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.91" ], "mac": "46:38:84:4d:1a:9a", "default": true, "dns": 
{} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.91" ], "mac": "46:38:84:4d:1a:9a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dbb81f 0xc004dbb830}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m77mm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m77mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessage
Path:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.91,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://23e656f4d55c652ff726f68971eacf581d5232bed78bf6f415a294b0588102b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 5 23:29:40.221: INFO: Pod "webserver-deployment-847dcfb7fb-tvbwk" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tvbwk webserver-deployment-847dcfb7fb- deployment-6532 12dc0692-15df-4a76-ad63-d0a7105aaf13 48739 0 2021-11-05 23:29:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ 
"name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.93" ], "mac": "56:37:38:a0:0d:1f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.93" ], "mac": "56:37:38:a0:0d:1f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9171fbec-b15c-4bab-ac8e-fbc445d2f225 0xc004dbba1f 0xc004dbba30}] [] [{kube-controller-manager Update v1 2021-11-05 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9171fbec-b15c-4bab-ac8e-fbc445d2f225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:29:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:29:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bxcpc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxcpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/s
erviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:29:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.93,StartTime:2021-11-05 23:29:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:29:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://7b0489609e969475f4270267454b6fb44666f70869526f7d815ad16f52fac4c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:40.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "deployment-6532" for this suite. • [SLOW TEST:14.122 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":29,"skipped":400,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:05.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-637 Nov 5 23:27:05.981: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:07.984: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:27:09.984: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Nov 5 23:27:09.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 5 23:27:10.241: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Nov 5 23:27:10.241: INFO: stdout: "iptables" Nov 5 23:27:10.241: INFO: proxyMode: iptables Nov 5 23:27:10.247: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 5 23:27:10.250: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-637 STEP: creating replication controller affinity-nodeport-timeout in namespace services-637 I1105 23:27:10.261945 39 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-637, replica count: 3 I1105 23:27:13.313032 39 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:27:16.314822 39 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:27:16.326: INFO: Creating new exec pod Nov 5 23:27:21.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Nov 5 23:27:21.649: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] 
succeeded!\n" Nov 5 23:27:21.649: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:27:21.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.1.192 80' Nov 5 23:27:21.946: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.1.192 80\nConnection to 10.233.1.192 80 port [tcp/http] succeeded!\n" Nov 5 23:27:21.946: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:27:21.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:27:22.365: INFO: rc: 1 Nov 5 23:27:22.365: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:23.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:27:23.802: INFO: rc: 1 Nov 5 23:27:23.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:24.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:27:24.646: INFO: rc: 1 Nov 5 23:27:24.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:27:25.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:27:25.636: INFO: rc: 1 Nov 5 23:27:25.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
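The framework keeps re-running this exact probe until the NodePort answers; the elided block below repeats it roughly once per second. As a minimal standalone sketch of that probe loop, assuming kubectl on PATH plus the kubeconfig, namespace, exec pod name, node IP, and node port taken from the log above (the retry interval and attempt cap are placeholders, not the framework's exact values):

#!/bin/sh
# Sketch of the reachability probe this test loops on; see assumptions above.
NS=services-637
POD=execpod-affinityxz4jp
NODE_IP=10.10.190.207
NODE_PORT=30904
for i in $(seq 1 90); do    # attempt cap is a placeholder
  if kubectl --kubeconfig=/root/.kube/config --namespace="$NS" exec "$POD" -- \
      /bin/sh -x -c "echo hostName | nc -v -t -w 2 $NODE_IP $NODE_PORT"; then
    echo "NodePort reachable on attempt $i"
    exit 0
  fi
  echo "Retrying..."        # mirrors the log lines above
  sleep 1
done
echo "NodePort $NODE_IP:$NODE_PORT still refusing connections" >&2
exit 1

Note that only a TCP connect failure (rc: 1 from nc) triggers a retry: the earlier probes by service name and ClusterIP counted as reachable even though they returned "HTTP/1.1 400 Bad Request", since "echo hostName" is not a valid HTTP request and the test only asserts that the connection succeeds.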
[... the same kubectl exec probe and "nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused ... Retrying..." block repeats roughly once per second from Nov 5 23:27:26.366 through Nov 5 23:28:51.619 ...]
Nov 5 23:28:52.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:52.635: INFO: rc: 1 Nov 5 23:28:52.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:53.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:53.796: INFO: rc: 1 Nov 5 23:28:53.796: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:54.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:54.617: INFO: rc: 1 Nov 5 23:28:54.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30904 + echo hostName nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:55.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:55.622: INFO: rc: 1 Nov 5 23:28:55.623: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:28:56.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:56.668: INFO: rc: 1 Nov 5 23:28:56.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:57.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:57.869: INFO: rc: 1 Nov 5 23:28:57.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:58.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:58.709: INFO: rc: 1 Nov 5 23:28:58.709: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:28:59.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:28:59.622: INFO: rc: 1 Nov 5 23:28:59.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:00.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:00.595: INFO: rc: 1 Nov 5 23:29:00.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:01.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:01.892: INFO: rc: 1 Nov 5 23:29:01.892: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:02.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:03.119: INFO: rc: 1 Nov 5 23:29:03.119: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:03.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:03.758: INFO: rc: 1 Nov 5 23:29:03.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:04.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:04.651: INFO: rc: 1 Nov 5 23:29:04.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:05.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:05.675: INFO: rc: 1 Nov 5 23:29:05.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:06.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:06.762: INFO: rc: 1 Nov 5 23:29:06.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:07.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:07.878: INFO: rc: 1 Nov 5 23:29:07.879: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:08.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:08.762: INFO: rc: 1 Nov 5 23:29:08.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:09.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:09.773: INFO: rc: 1 Nov 5 23:29:09.773: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:10.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:10.926: INFO: rc: 1 Nov 5 23:29:10.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:11.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:11.852: INFO: rc: 1 Nov 5 23:29:11.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:12.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:12.647: INFO: rc: 1 Nov 5 23:29:12.647: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:13.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:13.668: INFO: rc: 1 Nov 5 23:29:13.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:14.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:14.654: INFO: rc: 1 Nov 5 23:29:14.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:15.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:15.626: INFO: rc: 1 Nov 5 23:29:15.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:16.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:16.649: INFO: rc: 1 Nov 5 23:29:16.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30904 + echo hostName nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:17.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:18.089: INFO: rc: 1 Nov 5 23:29:18.089: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:18.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:18.635: INFO: rc: 1 Nov 5 23:29:18.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:19.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:19.633: INFO: rc: 1 Nov 5 23:29:19.633: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:20.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:20.686: INFO: rc: 1 Nov 5 23:29:20.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:21.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:21.621: INFO: rc: 1 Nov 5 23:29:21.621: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:22.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:22.694: INFO: rc: 1 Nov 5 23:29:22.694: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:22.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904' Nov 5 23:29:22.950: INFO: rc: 1 Nov 5 23:29:22.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-637 exec execpod-affinityxz4jp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30904: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30904 nc: connect to 10.10.190.207 port 30904 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
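What the loop above is doing: the suite execs into a client pod (execpod-affinityxz4jp) and probes the service's NodePort (30904) on a node IP (10.10.190.207) with nc, using a 2-second connect timeout per attempt and retrying about once a second until an overall 2m0s deadline. The sketch below reproduces that pattern as standalone Go. It is illustrative only: probeNodePort and its signature are hypothetical names, not the e2e framework's actual helper (the real logic lives in test/e2e/network/service.go, per the stack trace that follows), and it assumes kubectl is on PATH with a usable default kubeconfig.

```go
// Hypothetical sketch of the reachability probe seen in the log above;
// not the actual k8s e2e helper. Assumes kubectl is on PATH and that the
// exec pod already exists in the target namespace.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeNodePort execs into a pod and tests nodeIP:port with nc, retrying
// about once per second until the timeout elapses (2m0s in the log).
func probeNodePort(namespace, pod, nodeIP string, port int, timeout time.Duration) error {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", nodeIP, port)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--namespace", namespace, "exec", pod, "--",
			"/bin/sh", "-x", "-c", shellCmd).CombinedOutput()
		if err == nil {
			return nil // the backend answered; out carries its reply
		}
		// Mirrors the log's "rc: 1 ... Retrying..." cadence on failure.
		fmt.Printf("probe failed: %v\n%s\nRetrying...\n", err, out)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s:%d over TCP protocol",
		timeout, nodeIP, port)
}

func main() {
	// Values taken from the log above.
	err := probeNodePort("services-637", "execpod-affinityxz4jp",
		"10.10.190.207", 30904, 2*time.Minute)
	if err != nil {
		fmt.Println("FAIL:", err)
	}
}
```

Every attempt here died with Connection refused, so the test never got to exercise the session-affinity timeout behavior it was built to check and failed once the deadline expired. A refused connection on a NodePort typically points at kube-proxy not having programmed the port on that node, or at the backing endpoints never becoming ready; that failure is what the log records next.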
Nov 5 23:29:22.951: FAIL: Unexpected error:
    <*errors.errorString | 0xc0025be8d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30904 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30904 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc00121d1e0, 0x779f8f8, 0xc0019da580, 0xc001625680)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001202780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001202780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001202780, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Nov 5 23:29:22.952: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-637, will wait for the garbage collector to delete the pods
Nov 5 23:29:23.025: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.38029ms
Nov 5 23:29:23.126: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.145419ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-637".
STEP: Found 33 events.
Nov 5 23:29:38.743: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-pz5p8: { } Scheduled: Successfully assigned services-637/affinity-nodeport-timeout-pz5p8 to node1
Nov 5 23:29:38.743: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-qsgkj: { } Scheduled: Successfully assigned services-637/affinity-nodeport-timeout-qsgkj to node2
Nov 5 23:29:38.743: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-vjfr2: { } Scheduled: Successfully assigned services-637/affinity-nodeport-timeout-vjfr2 to node2
Nov 5 23:29:38.743: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityxz4jp: { } Scheduled: Successfully assigned services-637/execpod-affinityxz4jp to node2
Nov 5 23:29:38.743: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-637/kube-proxy-mode-detector to node2
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:07 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 497.97707ms
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:07 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:08 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:08 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:10 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-vjfr2
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:10 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-pz5p8
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:10 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-qsgkj
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:10 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:11 +0000 UTC - event for affinity-nodeport-timeout-pz5p8: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:12 +0000 UTC - event for affinity-nodeport-timeout-pz5p8: {kubelet node1} Started: Started container affinity-nodeport-timeout
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:12 +0000 UTC - event for affinity-nodeport-timeout-pz5p8: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 369.091944ms
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:12 +0000 UTC - event for affinity-nodeport-timeout-pz5p8: {kubelet node1} Created: Created container affinity-nodeport-timeout
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:12 +0000 UTC - event for affinity-nodeport-timeout-qsgkj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:12 +0000 UTC - event for affinity-nodeport-timeout-vjfr2: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:13 +0000 UTC - event for affinity-nodeport-timeout-qsgkj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 724.932521ms
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:13 +0000 UTC - event for affinity-nodeport-timeout-qsgkj: {kubelet node2} Started: Started container affinity-nodeport-timeout
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:13 +0000 UTC - event for affinity-nodeport-timeout-qsgkj: {kubelet node2} Created: Created container affinity-nodeport-timeout
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:13 +0000 UTC - event for affinity-nodeport-timeout-vjfr2: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 322.757776ms
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:13 +0000 UTC - event for affinity-nodeport-timeout-vjfr2: {kubelet node2} Created: Created container affinity-nodeport-timeout
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:13 +0000 UTC - event for affinity-nodeport-timeout-vjfr2: {kubelet node2} Started: Started container affinity-nodeport-timeout
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:17 +0000 UTC - event for execpod-affinityxz4jp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:18 +0000 UTC - event for execpod-affinityxz4jp: {kubelet node2} Started: Started container agnhost-container
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:18 +0000 UTC - event for execpod-affinityxz4jp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 315.808233ms
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:27:18 +0000 UTC - event for execpod-affinityxz4jp: {kubelet node2} Created: Created container agnhost-container
Nov 5 23:29:38.743: INFO: At 2021-11-05 23:29:22 +0000 UTC -
event for execpod-affinityxz4jp: {kubelet node2} Killing: Stopping container agnhost-container Nov 5 23:29:38.743: INFO: At 2021-11-05 23:29:23 +0000 UTC - event for affinity-nodeport-timeout-pz5p8: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Nov 5 23:29:38.743: INFO: At 2021-11-05 23:29:23 +0000 UTC - event for affinity-nodeport-timeout-qsgkj: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Nov 5 23:29:38.743: INFO: At 2021-11-05 23:29:23 +0000 UTC - event for affinity-nodeport-timeout-vjfr2: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Nov 5 23:29:38.745: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:29:38.745: INFO: Nov 5 23:29:38.750: INFO: Logging node info for node master1 Nov 5 23:29:38.753: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 48601 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:29 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:38.753: INFO: Logging kubelet events for node master1 Nov 5 23:29:38.755: INFO: Logging pods the kubelet thinks is on node master1 Nov 5 23:29:38.775: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.775: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:29:38.775: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.775: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:38.775: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:38.775: INFO: Container docker-registry ready: true, restart count 0 Nov 5 23:29:38.775: INFO: Container nginx ready: true, restart count 0 Nov 5 23:29:38.775: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.775: 
INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:29:38.775: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.775: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 5 23:29:38.775: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.775: INFO: Container kube-scheduler ready: true, restart count 0 Nov 5 23:29:38.775: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:38.775: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:29:38.775: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:29:38.775: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.775: INFO: Container coredns ready: true, restart count 2 Nov 5 23:29:38.775: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:38.775: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:38.775: INFO: Container node-exporter ready: true, restart count 0 W1105 23:29:38.790428 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:38.866: INFO: Latency metrics for node master1 Nov 5 23:29:38.866: INFO: Logging node info for node master2 Nov 5 23:29:38.869: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 48660 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:31 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:31 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:31 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:31 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:38.870: INFO: Logging kubelet events for node master2 Nov 5 23:29:38.873: INFO: Logging pods the kubelet thinks is on node master2 Nov 5 23:29:38.881: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:29:38.881: INFO: kube-multus-ds-amd64-m5646 started at 
2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:38.881: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Container nfd-controller ready: true, restart count 0 Nov 5 23:29:38.881: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:38.881: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:38.881: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:29:38.881: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:29:38.881: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:29:38.881: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:29:38.881: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:29:38.881: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.881: INFO: Container kube-controller-manager ready: true, restart count 2 W1105 23:29:38.897130 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:38.969: INFO: Latency metrics for node master2 Nov 5 23:29:38.969: INFO: Logging node info for node master3 Nov 5 23:29:38.971: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 48783 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:38.971: INFO: Logging kubelet events for node master3 Nov 5 23:29:38.974: INFO: Logging pods the kubelet thinks is on node master3 Nov 5 23:29:38.983: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:29:38.983: INFO: Container kube-flannel ready: true, restart count 1 Nov 5 23:29:38.983: 
INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container coredns ready: true, restart count 1 Nov 5 23:29:38.983: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:38.983: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:38.983: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:29:38.983: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:29:38.983: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:29:38.983: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:38.983: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container autoscaler ready: true, restart count 1 Nov 5 23:29:38.983: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:29:38.983: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:38.983: INFO: Container kube-proxy ready: true, restart count 2 W1105 23:29:38.997267 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
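------------------------------
The "Node Info" records above are full dumps of the v1.Node objects; the fields the framework's diagnostics rely on are Status.Capacity, Status.Allocatable, and the standard node conditions (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready). A minimal client-go sketch that prints the same fields outside the suite — illustrative only, assuming the same /root/.kube/config the suite uses:

// nodeinfo.go -- illustrative sketch, not part of the e2e framework.
// Prints allocatable CPU/memory and the Ready condition per node,
// roughly the information carried by the "Logging node info" records.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[v1.ResourceCPU]
		mem := n.Status.Allocatable[v1.ResourceMemory]
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady {
				fmt.Printf("%s allocatable cpu=%s mem=%s Ready=%s (%s)\n",
					n.Name, cpu.String(), mem.String(), c.Status, c.Reason)
			}
		}
	}
}
------------------------------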
Nov 5 23:29:39.065: INFO: Latency metrics for node master3 Nov 5 23:29:39.065: INFO: Logging node info for node node1 Nov 5 23:29:39.087: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 48780 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:36 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:39.088: INFO: Logging kubelet events for node node1 Nov 5 23:29:39.090: INFO: Logging pods the kubelet thinks is on node node1 Nov 5 23:29:39.108: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:29:39.108: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 
21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:29:39.108: INFO: webserver-deployment-847dcfb7fb-ch925 started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.108: INFO: ss2-0 started at 2021-11-05 23:29:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container webserver ready: false, restart count 0 Nov 5 23:29:39.108: INFO: var-expansion-6088554d-bb50-477f-92c8-dc7b74a322eb started at 2021-11-05 23:28:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container dapi-container ready: false, restart count 0 Nov 5 23:29:39.108: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:29:39.108: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:39.108: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:29:39.108: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 5 23:29:39.108: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container grafana ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:29:39.108: INFO: liveness-58ae240d-5759-470f-be6e-c54592abc01a started at 2021-11-05 23:28:35 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:29:39.108: INFO: execpod-affinitywtzlg started at 2021-11-05 23:28:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:29:39.108: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:29:39.108: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:29:39.108: INFO: webserver-deployment-847dcfb7fb-gsfvl started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.108: INFO: test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab started at 2021-11-05 23:27:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container test-webserver ready: true, restart count 0 Nov 5 23:29:39.108: INFO: webserver-deployment-847dcfb7fb-nzjlr started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.108: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:39.108: INFO: Container collectd ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:29:39.108: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container nginx-proxy ready: true, 
restart count 2 Nov 5 23:29:39.108: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:39.108: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:29:39.108: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:39.108: INFO: Container discover ready: false, restart count 0 Nov 5 23:29:39.108: INFO: Container init ready: false, restart count 0 Nov 5 23:29:39.108: INFO: Container install ready: false, restart count 0 Nov 5 23:29:39.108: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:29:39.108: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:39.108: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:39.108: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:29:39.108: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:29:39.108: INFO: affinity-nodeport-transition-wrj2s started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:29:39.108: INFO: webserver-deployment-847dcfb7fb-n766s started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.108: INFO: webserver-deployment-847dcfb7fb-qgzx2 started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.108: INFO: Container httpd ready: true, restart count 0 W1105 23:29:39.124058 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
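------------------------------
Each "Logging pods the kubelet thinks is on node ..." block above enumerates the pods bound to one node together with per-container readiness and restart counts. The same view can be approximated outside the framework by listing pods across all namespaces with a spec.nodeName field selector; a hedged sketch, with the node name hard-coded to node1 purely for illustration:

// Illustrative sketch (not framework code): reproduce the per-node pod
// listing above with a spec.nodeName field selector across all namespaces.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "node1" is one of the nodes dumped above; any node name works here.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s (%d+%d container statuses recorded)\n", p.Namespace, p.Name,
			len(p.Status.InitContainerStatuses), len(p.Status.ContainerStatuses))
		for _, c := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %t, restart count %d\n",
				c.Name, c.Ready, c.RestartCount)
		}
	}
}
------------------------------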
Nov 5 23:29:39.456: INFO: Latency metrics for node node1 Nov 5 23:29:39.456: INFO: Logging node info for node node2 Nov 5 23:29:39.461: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 48738 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:29:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:29:35 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:29:39.462: INFO: Logging kubelet events for node node2 Nov 5 23:29:39.465: INFO: Logging pods the kubelet thinks is on node node2 Nov 5 23:29:39.481: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:29:39.481: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-795d758f88-9dvw8 started at 2021-11-05 23:29:38 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container httpd ready: false, restart count 0 Nov 5 23:29:39.481: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:39.481: INFO: Container discover ready: false, restart count 0 Nov 
5 23:29:39.481: INFO: Container init ready: false, restart count 0 Nov 5 23:29:39.481: INFO: Container install ready: false, restart count 0 Nov 5 23:29:39.481: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:39.481: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:39.481: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:29:39.481: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:29:39.481: INFO: affinity-nodeport-transition-dbbbd started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-795d758f88-8qqn2 started at (0+0 container statuses recorded) Nov 5 23:29:39.481: INFO: ss2-2 started at 2021-11-05 23:29:04 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container webserver ready: true, restart count 0 Nov 5 23:29:39.481: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Init container install-cni ready: true, restart count 1 Nov 5 23:29:39.481: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:29:39.481: INFO: affinity-nodeport-transition-c9bcs started at 2021-11-05 23:28:08 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 5 23:29:39.481: INFO: test-webserver-90a45a64-d28b-4d0a-a780-0078e465d503 started at 2021-11-05 23:29:00 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container test-webserver ready: false, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-847dcfb7fb-qtvlh started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-795d758f88-c2c2w started at (0+0 container statuses recorded) Nov 5 23:29:39.481: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:29:39.481: INFO: webserver-deployment-795d758f88-pxph4 started at 2021-11-05 23:29:38 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container httpd ready: false, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-847dcfb7fb-tvbwk started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-795d758f88-h5zmt started at 2021-11-05 23:29:38 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container httpd ready: false, restart count 0 Nov 5 23:29:39.481: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:29:39.481: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:29:39.481: INFO: collectd-r2g57 started at 
2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:29:39.481: INFO: Container collectd ready: true, restart count 0 Nov 5 23:29:39.481: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:29:39.481: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:29:39.481: INFO: ss2-1 started at 2021-11-05 23:29:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container webserver ready: true, restart count 0 Nov 5 23:29:39.481: INFO: webserver-deployment-847dcfb7fb-4hlfm started at 2021-11-05 23:29:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container httpd ready: true, restart count 0 Nov 5 23:29:39.481: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:39.481: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:29:39.481: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:29:39.481: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 5 23:29:39.481: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:29:39.481: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:29:39.481: INFO: var-expansion-7be6cb66-0c8b-4c17-9505-4e8023efdcc0 started at 2021-11-05 23:29:31 +0000 UTC (0+1 container statuses recorded) Nov 5 23:29:39.481: INFO: Container dapi-container ready: false, restart count 0 W1105 23:29:39.494664 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:29:41.421: INFO: Latency metrics for node node2 Nov 5 23:29:41.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-637" for this suite. 
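------------------------------
The node and pod dumps above are failure diagnostics: as the summary just below reports, the NodePort endpoint 10.10.190.207:30904 never became reachable within the test's 2m0s window. The feature under test is configured on the Service object itself, via spec.sessionAffinity: ClientIP together with spec.sessionAffinityConfig.clientIP.timeoutSeconds, and the reachability check the framework performs amounts to a timed TCP dial loop. A simplified stand-in for that check — not the framework's actual helper in test/e2e/network/service.go, which does considerably more:

// checkReachable dials host:port over TCP once per second until the
// deadline passes -- a simplified illustration of the kind of check whose
// failure is reported below ("service is not reachable within 2m0s
// timeout ... over TCP protocol").
package main

import (
	"fmt"
	"net"
	"time"
)

func checkReachable(host string, port int, timeout time.Duration) error {
	addr := net.JoinHostPort(host, fmt.Sprint(port))
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol",
		timeout, addr)
}

func main() {
	// Endpoint taken from the failure summary below.
	if err := checkReachable("10.10.190.207", 30904, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
------------------------------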
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [155.484 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:29:22.951: Unexpected error: <*errors.errorString | 0xc0025be8d0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30904 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30904 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":312,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:31.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:29:37.097: INFO: Deleting pod "var-expansion-7be6cb66-0c8b-4c17-9505-4e8023efdcc0" in namespace "var-expansion-6786" Nov 5 23:29:37.102: INFO: Wait up to 5m0s for pod "var-expansion-7be6cb66-0c8b-4c17-9505-4e8023efdcc0" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:45.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6786" for this suite. 
• [SLOW TEST:14.064 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":20,"skipped":335,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:37.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Nov 5 23:29:37.053: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3315 15fd4f60-e004-45b1-a0dd-5122cdd39c65 48790 0 2021-11-05 23:29:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-05 23:29:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:29:37.053: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3315 15fd4f60-e004-45b1-a0dd-5122cdd39c65 48791 0 2021-11-05 23:29:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-05 23:29:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:29:37.055: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3315 15fd4f60-e004-45b1-a0dd-5122cdd39c65 48792 0 2021-11-05 23:29:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-05 23:29:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Nov 5 23:29:47.073: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3315 
15fd4f60-e004-45b1-a0dd-5122cdd39c65 49219 0 2021-11-05 23:29:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-05 23:29:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:29:47.073: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3315 15fd4f60-e004-45b1-a0dd-5122cdd39c65 49220 0 2021-11-05 23:29:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-05 23:29:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:29:47.073: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3315 15fd4f60-e004-45b1-a0dd-5122cdd39c65 49221 0 2021-11-05 23:29:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-05 23:29:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:47.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3315" for this suite. • [SLOW TEST:10.065 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":32,"skipped":494,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:40.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 5 23:29:40.280: INFO: Waiting up to 5m0s for pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022" in namespace "emptydir-1269" to be "Succeeded or Failed" Nov 5 23:29:40.283: INFO: Pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.546342ms Nov 5 23:29:42.287: INFO: Pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006790002s Nov 5 23:29:44.291: INFO: Pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01071853s Nov 5 23:29:46.295: INFO: Pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015057141s Nov 5 23:29:48.298: INFO: Pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018085124s STEP: Saw pod success Nov 5 23:29:48.298: INFO: Pod "pod-5ae32d05-e848-4367-8d9c-e81789f4b022" satisfied condition "Succeeded or Failed" Nov 5 23:29:48.301: INFO: Trying to get logs from node node1 pod pod-5ae32d05-e848-4367-8d9c-e81789f4b022 container test-container: STEP: delete the pod Nov 5 23:29:48.312: INFO: Waiting for pod pod-5ae32d05-e848-4367-8d9c-e81789f4b022 to disappear Nov 5 23:29:48.314: INFO: Pod pod-5ae32d05-e848-4367-8d9c-e81789f4b022 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:48.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1269" for this suite. • [SLOW TEST:8.072 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":406,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:41.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8519.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8519.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8519.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8519.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:29:49.493: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.495: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.498: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.501: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.509: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.511: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.515: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8519.svc.cluster.local from pod 
dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.517: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8519.svc.cluster.local from pod dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf: the server could not find the requested resource (get pods dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf) Nov 5 23:29:49.522: INFO: Lookups using dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8519.svc.cluster.local jessie_udp@dns-test-service-2.dns-8519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8519.svc.cluster.local] Nov 5 23:29:54.560: INFO: DNS probes using dns-8519/dns-test-994a67fe-fad3-4b11-886f-d10bd73ee9cf succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:54.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8519" for this suite. • [SLOW TEST:13.139 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":13,"skipped":313,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:54.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:54.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9942" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":14,"skipped":353,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:48.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-525a1e95-aa3d-4a99-b736-6bc0b2c62ac1 STEP: Creating a pod to test consume configMaps Nov 5 23:29:48.455: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f" in namespace "projected-5892" to be "Succeeded or Failed" Nov 5 23:29:48.461: INFO: Pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323359ms Nov 5 23:29:50.464: INFO: Pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00956616s Nov 5 23:29:52.470: INFO: Pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014622961s Nov 5 23:29:54.473: INFO: Pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018062178s Nov 5 23:29:56.477: INFO: Pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022262163s STEP: Saw pod success Nov 5 23:29:56.477: INFO: Pod "pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f" satisfied condition "Succeeded or Failed" Nov 5 23:29:56.480: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f container agnhost-container: STEP: delete the pod Nov 5 23:29:56.504: INFO: Waiting for pod pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f to disappear Nov 5 23:29:56.506: INFO: Pod pod-projected-configmaps-1737af7c-c630-4533-a44f-3d972852c72f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:56.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5892" for this suite. 
• [SLOW TEST:8.097 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":464,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:45.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:29:45.196: INFO: The status of Pod server-envvars-30978b6a-312c-47c5-8d33-dadf6bb9d83e is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:29:47.204: INFO: The status of Pod server-envvars-30978b6a-312c-47c5-8d33-dadf6bb9d83e is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:29:49.199: INFO: The status of Pod server-envvars-30978b6a-312c-47c5-8d33-dadf6bb9d83e is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:29:51.200: INFO: The status of Pod server-envvars-30978b6a-312c-47c5-8d33-dadf6bb9d83e is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:29:53.199: INFO: The status of Pod server-envvars-30978b6a-312c-47c5-8d33-dadf6bb9d83e is Running (Ready = true) Nov 5 23:29:53.219: INFO: Waiting up to 5m0s for pod "client-envvars-60daec09-6696-4a24-b692-134d476ad217" in namespace "pods-1386" to be "Succeeded or Failed" Nov 5 23:29:53.221: INFO: Pod "client-envvars-60daec09-6696-4a24-b692-134d476ad217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499935ms Nov 5 23:29:55.224: INFO: Pod "client-envvars-60daec09-6696-4a24-b692-134d476ad217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004811728s Nov 5 23:29:57.227: INFO: Pod "client-envvars-60daec09-6696-4a24-b692-134d476ad217": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008056016s STEP: Saw pod success Nov 5 23:29:57.227: INFO: Pod "client-envvars-60daec09-6696-4a24-b692-134d476ad217" satisfied condition "Succeeded or Failed" Nov 5 23:29:57.229: INFO: Trying to get logs from node node2 pod client-envvars-60daec09-6696-4a24-b692-134d476ad217 container env3cont: STEP: delete the pod Nov 5 23:29:57.284: INFO: Waiting for pod client-envvars-60daec09-6696-4a24-b692-134d476ad217 to disappear Nov 5 23:29:57.286: INFO: Pod client-envvars-60daec09-6696-4a24-b692-134d476ad217 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:57.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1386" for this suite. • [SLOW TEST:12.134 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":359,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:54.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-c103ad30-3bea-4ae6-b7c3-370f169188c8 STEP: Creating a pod to test consume configMaps Nov 5 23:29:54.750: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042" in namespace "projected-3648" to be "Succeeded or Failed" Nov 5 23:29:54.753: INFO: Pod "pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042": Phase="Pending", Reason="", readiness=false. Elapsed: 3.577209ms Nov 5 23:29:56.757: INFO: Pod "pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006674679s Nov 5 23:29:58.759: INFO: Pod "pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009378204s STEP: Saw pod success Nov 5 23:29:58.759: INFO: Pod "pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042" satisfied condition "Succeeded or Failed" Nov 5 23:29:58.762: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042 container agnhost-container: STEP: delete the pod Nov 5 23:29:58.773: INFO: Waiting for pod pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042 to disappear Nov 5 23:29:58.775: INFO: Pod pod-projected-configmaps-9904749c-6df7-4ba1-ad7f-2cca662b9042 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:29:58.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3648" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":358,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:00.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:00.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9761" for this suite. 
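The container-probe spec summarized just below runs a pod whose readiness probe always fails and verifies it never becomes Ready and is never restarted: readiness only gates the pod's Ready condition and Service traffic, and unlike a liveness probe it never triggers restarts. A sketch of such a container, using the v1.21-era embedded Probe.Handler field (newer API versions rename it ProbeHandler); image, command, and timings are assumptions.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // A readiness probe that can never succeed: the pod runs but is never
    // marked Ready, and the kubelet does not restart it for that reason.
    c := corev1.Container{
        Name:    "probe-demo",          // assumed name
        Image:   "busybox:1.34",        // assumed image
        Command: []string{"sleep", "3600"},
        ReadinessProbe: &corev1.Probe{
            Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
                Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
            },
            InitialDelaySeconds: 5, // assumed timing
            PeriodSeconds:       5, // assumed timing
        },
    }
    out, err := json.MarshalIndent(c, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}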
• [SLOW TEST:60.048 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":348,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:00.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-ac5214f7-59e4-43c9-912c-813311830486 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:00.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4101" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":24,"skipped":353,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:56.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-81dfd596-79b6-420b-96b6-fceff3687df2 STEP: Creating a pod to test consume secrets Nov 5 23:29:56.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21" in namespace "projected-7036" to be "Succeeded or Failed" Nov 5 23:29:56.563: INFO: Pod "pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21": Phase="Pending", Reason="", readiness=false. Elapsed: 3.559741ms Nov 5 23:29:58.567: INFO: Pod "pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007577669s Nov 5 23:30:00.570: INFO: Pod "pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010571708s STEP: Saw pod success Nov 5 23:30:00.570: INFO: Pod "pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21" satisfied condition "Succeeded or Failed" Nov 5 23:30:00.573: INFO: Trying to get logs from node node2 pod pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21 container projected-secret-volume-test: STEP: delete the pod Nov 5 23:30:00.648: INFO: Waiting for pod pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21 to disappear Nov 5 23:30:00.649: INFO: Pod pod-projected-secrets-c51d7981-9fe9-4c90-a19b-48655ab19a21 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:00.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7036" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":468,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:57.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Nov 5 23:30:01.918: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4125 pod-service-account-e162fe32-00ec-41e0-a28b-3557c16d88f7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Nov 5 23:30:02.381: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4125 pod-service-account-e162fe32-00ec-41e0-a28b-3557c16d88f7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Nov 5 23:30:02.859: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4125 pod-service-account-e162fe32-00ec-41e0-a28b-3557c16d88f7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:03.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4125" for this suite. 
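The ServiceAccounts spec above execs into the pod and cats the three auto-mounted credential files. From inside any pod (with automounting enabled) they live under one fixed directory, the same files client-go's rest.InClusterConfig() consumes. A small sketch that checks for them from inside a container:

package main

import (
    "fmt"
    "os"
)

// The kubelet projects the ServiceAccount credentials into every pod,
// unless automounting is disabled, under this fixed directory.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
    for _, name := range []string{"token", "ca.crt", "namespace"} {
        b, err := os.ReadFile(saDir + "/" + name)
        if err != nil {
            fmt.Printf("%s: not mounted (%v)\n", name, err)
            continue
        }
        fmt.Printf("%s: %d bytes\n", name, len(b))
    }
}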
• [SLOW TEST:5.750 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":22,"skipped":405,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:03.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-7ee90da5-3023-4d8d-be4e-7fbfd136ef36 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:09.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-612" for this suite. • [SLOW TEST:6.094 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":407,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:09.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-c2fe0aba-8ab3-4a97-9d94-e706372b38c9 STEP: Creating a pod to test consume configMaps Nov 5 23:30:09.287: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d" in namespace "projected-8439" to be "Succeeded or Failed" Nov 5 23:30:09.289: INFO: Pod "pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031945ms Nov 5 23:30:11.292: INFO: Pod "pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005160222s Nov 5 23:30:13.295: INFO: Pod "pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007532409s STEP: Saw pod success Nov 5 23:30:13.295: INFO: Pod "pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d" satisfied condition "Succeeded or Failed" Nov 5 23:30:13.297: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d container projected-configmap-volume-test: STEP: delete the pod Nov 5 23:30:13.310: INFO: Waiting for pod pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d to disappear Nov 5 23:30:13.311: INFO: Pod pod-projected-configmaps-b5c74278-302f-4a45-8a33-a8c217b5e80d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:13.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8439" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":423,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:47.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-jvgg STEP: Creating a pod to test atomic-volume-subpath Nov 5 23:29:47.189: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jvgg" in namespace "subpath-5526" to be "Succeeded or Failed" Nov 5 23:29:47.192: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406142ms Nov 5 23:29:49.196: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00640842s Nov 5 23:29:51.200: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010143171s Nov 5 23:29:53.202: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012662369s Nov 5 23:29:55.206: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016302665s Nov 5 23:29:57.209: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 10.019764447s Nov 5 23:29:59.213: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.023367432s Nov 5 23:30:01.217: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 14.027723437s Nov 5 23:30:03.220: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 16.030916116s Nov 5 23:30:05.224: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 18.034397966s Nov 5 23:30:07.228: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 20.038454327s Nov 5 23:30:09.231: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 22.041968414s Nov 5 23:30:11.236: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 24.046996074s Nov 5 23:30:13.240: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Running", Reason="", readiness=true. Elapsed: 26.050498878s Nov 5 23:30:15.245: INFO: Pod "pod-subpath-test-secret-jvgg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.055332114s STEP: Saw pod success Nov 5 23:30:15.245: INFO: Pod "pod-subpath-test-secret-jvgg" satisfied condition "Succeeded or Failed" Nov 5 23:30:15.248: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-jvgg container test-container-subpath-secret-jvgg: STEP: delete the pod Nov 5 23:30:15.280: INFO: Waiting for pod pod-subpath-test-secret-jvgg to disappear Nov 5 23:30:15.282: INFO: Pod pod-subpath-test-secret-jvgg no longer exists STEP: Deleting pod pod-subpath-test-secret-jvgg Nov 5 23:30:15.282: INFO: Deleting pod "pod-subpath-test-secret-jvgg" in namespace "subpath-5526" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:15.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5526" for this suite. 
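The Subpath spec summarized just below mounts a single key of a secret volume via subPath. A sketch of the container side, with the volume name, paths, and key assumed (k8s.io/api required); one caveat worth noting is that subPath mounts of secret and configMap volumes do not receive updates made to the source object after the pod starts.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Mount one file out of a secret volume rather than the whole directory.
    c := corev1.Container{
        Name:    "test-container-subpath-secret",
        Image:   "busybox:1.34", // assumed image
        Command: []string{"sh", "-c", "cat /test-volume/data"},
        VolumeMounts: []corev1.VolumeMount{{
            Name:      "test-volume",       // assumed to reference a secret volume in the pod spec
            MountPath: "/test-volume/data", // assumed mount path
            SubPath:   "data",              // assumed key inside the secret
        }},
    }
    out, err := json.MarshalIndent(c, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}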
• [SLOW TEST:28.145 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":529,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:29:58.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Nov 5 23:29:58.811: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:18.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4955" for this suite. 
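The Pods spec summarized just below drives its submit-and-remove flow through a watch: the watch is opened first, so every lifecycle transition of a matching pod arrives as an event, ending in DELETED once graceful deletion completes. A sketch of that pattern with client-go; the kubeconfig path, namespace, and label selector are assumptions.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Open the watch before creating the pod so no event is missed.
    w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector: "name=pod-submit-remove", // hypothetical label
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    // Blocks until the watch closes; a pod created then gracefully deleted
    // shows up as ADDED, some MODIFIED events, and finally DELETED.
    for ev := range w.ResultChan() {
        fmt.Println("observed:", ev.Type)
    }
}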
• [SLOW TEST:19.971 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":360,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:00.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-ttmt STEP: Creating a pod to test atomic-volume-subpath Nov 5 23:30:00.417: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ttmt" in namespace "subpath-8147" to be "Succeeded or Failed" Nov 5 23:30:00.421: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144292ms Nov 5 23:30:02.425: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789506s Nov 5 23:30:04.429: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012134458s Nov 5 23:30:06.433: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 6.0163944s Nov 5 23:30:08.436: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 8.019496188s Nov 5 23:30:10.440: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 10.023024861s Nov 5 23:30:12.444: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 12.027202673s Nov 5 23:30:14.448: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 14.031008399s Nov 5 23:30:16.451: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 16.034548255s Nov 5 23:30:18.456: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 18.039154407s Nov 5 23:30:20.460: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 20.042752595s Nov 5 23:30:22.462: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 22.045550052s Nov 5 23:30:24.466: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Running", Reason="", readiness=true. Elapsed: 24.048734948s Nov 5 23:30:26.470: INFO: Pod "pod-subpath-test-downwardapi-ttmt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.052935083s STEP: Saw pod success Nov 5 23:30:26.470: INFO: Pod "pod-subpath-test-downwardapi-ttmt" satisfied condition "Succeeded or Failed" Nov 5 23:30:26.472: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-ttmt container test-container-subpath-downwardapi-ttmt: STEP: delete the pod Nov 5 23:30:26.490: INFO: Waiting for pod pod-subpath-test-downwardapi-ttmt to disappear Nov 5 23:30:26.493: INFO: Pod pod-subpath-test-downwardapi-ttmt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ttmt Nov 5 23:30:26.493: INFO: Deleting pod "pod-subpath-test-downwardapi-ttmt" in namespace "subpath-8147" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:26.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8147" for this suite. • [SLOW TEST:26.126 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":359,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:06.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-9993 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Nov 5 23:28:06.556: INFO: Found 0 stateful pods, waiting for 3 Nov 5 23:28:16.561: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:28:16.561: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:28:16.561: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 5 23:28:26.560: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:28:26.560: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:28:26.560: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Nov 5 23:28:26.588: 
INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Nov 5 23:28:36.619: INFO: Updating stateful set ss2 Nov 5 23:28:36.623: INFO: Waiting for Pod statefulset-9993/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Nov 5 23:28:46.647: INFO: Found 1 stateful pods, waiting for 3 Nov 5 23:28:56.652: INFO: Found 2 stateful pods, waiting for 3 Nov 5 23:29:06.651: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:29:06.651: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:29:06.651: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 5 23:29:16.654: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:29:16.654: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:29:16.654: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Nov 5 23:29:16.677: INFO: Updating stateful set ss2 Nov 5 23:29:16.681: INFO: Waiting for Pod statefulset-9993/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:29:26.707: INFO: Updating stateful set ss2 Nov 5 23:29:26.712: INFO: Waiting for StatefulSet statefulset-9993/ss2 to complete update Nov 5 23:29:26.712: INFO: Waiting for Pod statefulset-9993/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 5 23:29:36.717: INFO: Waiting for StatefulSet statefulset-9993/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 5 23:29:46.718: INFO: Deleting all statefulset in ns statefulset-9993 Nov 5 23:29:46.719: INFO: Scaling statefulset ss2 to 0 Nov 5 23:30:26.735: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:30:26.737: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:26.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9993" for this suite. 
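The canary and phased rolling update sequence above is driven by spec.updateStrategy.rollingUpdate.partition: only pods with ordinal >= partition move to the new revision, so a partition of 2 on a three-replica set updates only ss2-2, and lowering it step by step yields the phased rollout. A sketch of setting such a partition with a merge patch; the kubeconfig path is an assumption and the namespace and name are reused from the log for illustration.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Pods with ordinal >= 2 are updated to the new revision; the rest
    // stay on the old one until the partition is lowered.
    patch := []byte(`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}`)
    ss, err := cs.AppsV1().StatefulSets("statefulset-9993").Patch(
        context.TODO(), "ss2", types.MergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("partition set on", ss.Name)
}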
• [SLOW TEST:140.234 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":24,"skipped":504,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:26.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 5 23:30:26.881: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Nov 5 23:30:26.884: INFO: starting watch STEP: patching STEP: updating Nov 5 23:30:26.898: INFO: waiting for watch events with expected annotations Nov 5 23:30:26.898: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:26.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-755" for this suite. 
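The Ingress API spec above exercises pure API-machinery operations (create, get, list, watch, patch, update, delete, and the /status subresource) against networking.k8s.io/v1; no ingress controller needs to exist for those operations to succeed. A sketch of a minimal v1 Ingress of the kind involved, with the backend service name and port assumed; it prints the object rather than creating it (k8s.io/api required).

package main

import (
    "encoding/json"
    "fmt"

    networkingv1 "k8s.io/api/networking/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A defaultBackend-only Ingress: every request goes to one Service.
    ing := &networkingv1.Ingress{
        ObjectMeta: metav1.ObjectMeta{Name: "ingress-demo"}, // assumed name
        Spec: networkingv1.IngressSpec{
            DefaultBackend: &networkingv1.IngressBackend{
                Service: &networkingv1.IngressServiceBackend{
                    Name: "backend-svc", // assumed service
                    Port: networkingv1.ServiceBackendPort{Number: 80},
                },
            },
        },
    }
    out, err := json.MarshalIndent(ing, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}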
• ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:00.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4831 STEP: creating service affinity-clusterip in namespace services-4831 STEP: creating replication controller affinity-clusterip in namespace services-4831 I1105 23:30:00.710829 34 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4831, replica count: 3 I1105 23:30:03.762253 34 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:30:06.763473 34 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:30:06.767: INFO: Creating new exec pod Nov 5 23:30:13.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4831 exec execpod-affinitylr8sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Nov 5 23:30:14.009: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Nov 5 23:30:14.009: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:30:14.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4831 exec execpod-affinitylr8sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.28.92 80' Nov 5 23:30:14.236: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.28.92 80\nConnection to 10.233.28.92 80 port [tcp/http] succeeded!\n" Nov 5 23:30:14.236: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:30:14.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4831 exec execpod-affinitylr8sd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.28.92:80/ ; done' Nov 5 23:30:14.540: INFO: stderr: "+ seq 0 15\n" followed by 16 identical "+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.92:80/\n" pairs [repeated invocations elided] Nov 5 23:30:14.541: INFO: stdout: "affinity-clusterip-v6tt4" repeated 16 times, one line per request Nov 5 23:30:14.541: INFO: Received response from host: affinity-clusterip-v6tt4 [logged identically for all 16 requests; duplicate lines elided] Nov 5 23:30:14.541: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4831, will wait for the garbage collector to delete the pods Nov 5 23:30:14.605: INFO: Deleting ReplicationController affinity-clusterip took: 4.330903ms Nov 5 23:30:14.706: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.898218ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:29.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4831" for this suite.
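Session affinity here comes from the Service's spec.sessionAffinity field: with ClientIP, kube-proxy pins each client to a single backend, which is why every curl request above landed on affinity-clusterip-v6tt4. A minimal sketch of setting this up by hand (names mirror the test; the port numbers are assumptions, since the manifest isn't shown):

  # expose the test's replication controller, then switch affinity to ClientIP
  kubectl -n services-4831 expose rc affinity-clusterip --port=80 --target-port=9376
  kubectl -n services-4831 patch service affinity-clusterip -p '{"spec":{"sessionAffinity":"ClientIP"}}'
  # verify stickiness the same way the test does: every request should return the same pod name
  kubectl -n services-4831 exec execpod-affinitylr8sd -- /bin/sh -c \
    'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://10.233.28.92:80/; echo; done'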
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.340 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":479,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:13.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:30:14.140: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:30:16.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751814, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751814, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751814, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751814, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:30:18.153: INFO: deployment status: unchanged from 23:30:16 (still Available=False/MinimumReplicasUnavailable, Progressing=True/ReplicaSetUpdated; identical status dump elided) STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:30:21.160: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:30:21.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-225-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:29.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6887" for this suite. STEP: Destroying namespace "webhook-6887-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.969 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":25,"skipped":424,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:08.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-3543 STEP: creating service affinity-nodeport-transition in namespace services-3543 STEP: creating replication controller affinity-nodeport-transition in namespace services-3543 I1105 23:28:08.881717 28 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3543, replica count: 3 I1105 23:28:11.933156 28 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:28:14.933666 28 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 5 23:28:14.944: INFO: Creating new exec pod Nov 5 23:28:19.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Nov 5 23:28:20.203: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Nov 5 23:28:20.203: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:28:20.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.31.22 80' Nov 5 23:28:20.443: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.31.22 80\nConnection to 10.233.31.22 80 port [tcp/http] succeeded!\n" Nov 5 23:28:20.443: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:28:20.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:28:20.699: INFO: rc: 1 Nov 5 23:28:20.699: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... [the identical probe of 10.10.190.207:31392 was retried roughly once per second; every attempt from 23:28:21.700 through 23:29:27.013 failed the same way, with "nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused" followed by "Retrying..."; the repeated retry blocks are elided here]
Nov 5 23:29:27.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:28.543: INFO: rc: 1 Nov 5 23:29:28.543: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:28.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:29.247: INFO: rc: 1 Nov 5 23:29:29.247: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:29.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:30.071: INFO: rc: 1 Nov 5 23:29:30.071: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:30.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:31.523: INFO: rc: 1 Nov 5 23:29:31.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31392 + echo hostName nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:31.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:31.950: INFO: rc: 1 Nov 5 23:29:31.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:32.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:32.943: INFO: rc: 1 Nov 5 23:29:32.943: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:33.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:33.967: INFO: rc: 1 Nov 5 23:29:33.967: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:34.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:35.624: INFO: rc: 1 Nov 5 23:29:35.624: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:35.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:35.976: INFO: rc: 1 Nov 5 23:29:35.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:36.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:36.938: INFO: rc: 1 Nov 5 23:29:36.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:37.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:37.989: INFO: rc: 1 Nov 5 23:29:37.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:38.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:38.977: INFO: rc: 1 Nov 5 23:29:38.978: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:39.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:39.983: INFO: rc: 1 Nov 5 23:29:39.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:40.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:41.002: INFO: rc: 1 Nov 5 23:29:41.002: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:41.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:42.208: INFO: rc: 1 Nov 5 23:29:42.208: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:42.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:43.253: INFO: rc: 1 Nov 5 23:29:43.253: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:43.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:44.220: INFO: rc: 1 Nov 5 23:29:44.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:44.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:44.935: INFO: rc: 1 Nov 5 23:29:44.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:45.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:46.039: INFO: rc: 1 Nov 5 23:29:46.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:46.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:47.204: INFO: rc: 1 Nov 5 23:29:47.204: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:47.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:47.960: INFO: rc: 1 Nov 5 23:29:47.960: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:48.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:49.044: INFO: rc: 1 Nov 5 23:29:49.044: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31392 + echo hostName nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:49.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:49.938: INFO: rc: 1 Nov 5 23:29:49.939: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:50.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:50.941: INFO: rc: 1 Nov 5 23:29:50.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:51.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:51.950: INFO: rc: 1 Nov 5 23:29:51.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + nc -v+ -t -w 2echo 10.10.190.207 31392 hostName nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:52.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:52.970: INFO: rc: 1 Nov 5 23:29:52.970: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:53.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:53.965: INFO: rc: 1 Nov 5 23:29:53.965: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:54.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:54.934: INFO: rc: 1 Nov 5 23:29:54.934: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:55.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:56.307: INFO: rc: 1 Nov 5 23:29:56.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:56.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:57.167: INFO: rc: 1 Nov 5 23:29:57.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:57.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:57.999: INFO: rc: 1 Nov 5 23:29:57.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:29:58.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:59.080: INFO: rc: 1 Nov 5 23:29:59.080: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:29:59.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:29:59.978: INFO: rc: 1 Nov 5 23:29:59.978: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:00.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:00.940: INFO: rc: 1 Nov 5 23:30:00.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:01.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:02.301: INFO: rc: 1 Nov 5 23:30:02.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:02.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:02.982: INFO: rc: 1 Nov 5 23:30:02.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:30:03.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:03.947: INFO: rc: 1 Nov 5 23:30:03.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:04.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:04.930: INFO: rc: 1 Nov 5 23:30:04.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:05.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:05.938: INFO: rc: 1 Nov 5 23:30:05.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:06.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:06.923: INFO: rc: 1 Nov 5 23:30:06.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:30:07.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:08.870: INFO: rc: 1 Nov 5 23:30:08.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:09.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:09.989: INFO: rc: 1 Nov 5 23:30:09.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:10.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:11.258: INFO: rc: 1 Nov 5 23:30:11.258: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:11.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:11.954: INFO: rc: 1 Nov 5 23:30:11.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:30:12.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:12.962: INFO: rc: 1 Nov 5 23:30:12.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:13.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:13.931: INFO: rc: 1 Nov 5 23:30:13.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:14.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:15.103: INFO: rc: 1 Nov 5 23:30:15.103: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:15.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:15.926: INFO: rc: 1 Nov 5 23:30:15.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 5 23:30:16.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:16.969: INFO: rc: 1 Nov 5 23:30:16.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:17.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:17.965: INFO: rc: 1 Nov 5 23:30:17.965: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:18.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:18.942: INFO: rc: 1 Nov 5 23:30:18.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:19.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:19.958: INFO: rc: 1 Nov 5 23:30:19.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
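The shape of the loop above is the framework's standard poll-until-timeout reachability probe: one kubectl exec / nc attempt against 10.10.190.207:31392 roughly every second, up to a two-minute deadline; the final attempts and the resulting 2m0s failure follow below. A minimal, self-contained sketch of the equivalent logic in Go — assuming wait.PollImmediate semantics from k8s.io/apimachinery; the real loop lives in the e2e service helpers and is not reproduced verbatim here:

    package main

    import (
        "fmt"
        "net"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // probeOnce mirrors the 'nc -v -t -w 2 <ip> <port>' check from the log:
    // a single TCP dial with a 2-second timeout.
    func probeOnce(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false // e.g. "connection refused", as logged above
        }
        conn.Close()
        return true
    }

    func main() {
        addr := "10.10.190.207:31392" // endpoint taken from the log
        // Poll once per second until the service answers or 2 minutes elapse,
        // matching the retry cadence and the "within 2m0s timeout" message.
        if err := wait.PollImmediate(1*time.Second, 2*time.Minute, func() (bool, error) {
            return probeOnce(addr), nil
        }); err != nil {
            fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", addr)
        }
    }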
Nov 5 23:30:20.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:20.947: INFO: rc: 1 Nov 5 23:30:20.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:20.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392' Nov 5 23:30:21.187: INFO: rc: 1 Nov 5 23:30:21.187: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3543 exec execpod-affinitywtzlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31392: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31392 nc: connect to 10.10.190.207 port 31392 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 5 23:30:21.187: FAIL: Unexpected error: <*errors.errorString | 0xc0047a8940>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31392 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31392 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0014dd600, 0x779f8f8, 0xc000afdce0, 0xc000b2c780, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2527 k8s.io/kubernetes/test/e2e/network.glob..func24.27() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00323c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00323c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00323c180, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 Nov 5 23:30:21.189: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-3543, will wait for the garbage collector to delete the pods Nov 5 23:30:21.253: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.214339ms Nov 5 23:30:21.354: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.748165ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-3543". STEP: Found 27 events. 
Nov 5 23:30:38.873: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-c9bcs: { } Scheduled: Successfully assigned services-3543/affinity-nodeport-transition-c9bcs to node2 Nov 5 23:30:38.873: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-dbbbd: { } Scheduled: Successfully assigned services-3543/affinity-nodeport-transition-dbbbd to node2 Nov 5 23:30:38.873: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-wrj2s: { } Scheduled: Successfully assigned services-3543/affinity-nodeport-transition-wrj2s to node1 Nov 5 23:30:38.873: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitywtzlg: { } Scheduled: Successfully assigned services-3543/execpod-affinitywtzlg to node1 Nov 5 23:30:38.873: INFO: At 2021-11-05 23:28:08 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-c9bcs Nov 5 23:30:38.873: INFO: At 2021-11-05 23:28:08 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-wrj2s Nov 5 23:30:38.873: INFO: At 2021-11-05 23:28:08 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-dbbbd Nov 5 23:30:38.873: INFO: At 2021-11-05 23:28:10 +0000 UTC - event for affinity-nodeport-transition-c9bcs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 5 23:30:38.873: INFO: At 2021-11-05 23:28:10 +0000 UTC - event for affinity-nodeport-transition-wrj2s: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 5 23:30:38.873: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-c9bcs: {kubelet node2} Started: Started container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-c9bcs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 830.578713ms Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-c9bcs: {kubelet node2} Created: Created container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-dbbbd: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 636.247141ms Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-dbbbd: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-wrj2s: {kubelet node1} Started: Started container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-wrj2s: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 393.417972ms Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:11 +0000 UTC - event for affinity-nodeport-transition-wrj2s: {kubelet node1} Created: Created container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:12 +0000 UTC - event for affinity-nodeport-transition-dbbbd: {kubelet node2} Started: Started container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:12 +0000 UTC - event for 
affinity-nodeport-transition-dbbbd: {kubelet node2} Created: Created container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:16 +0000 UTC - event for execpod-affinitywtzlg: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:17 +0000 UTC - event for execpod-affinitywtzlg: {kubelet node1} Created: Created container agnhost-container Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:17 +0000 UTC - event for execpod-affinitywtzlg: {kubelet node1} Started: Started container agnhost-container Nov 5 23:30:38.874: INFO: At 2021-11-05 23:28:17 +0000 UTC - event for execpod-affinitywtzlg: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 295.270717ms Nov 5 23:30:38.874: INFO: At 2021-11-05 23:30:21 +0000 UTC - event for affinity-nodeport-transition-c9bcs: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:30:21 +0000 UTC - event for affinity-nodeport-transition-dbbbd: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:30:21 +0000 UTC - event for affinity-nodeport-transition-wrj2s: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Nov 5 23:30:38.874: INFO: At 2021-11-05 23:30:21 +0000 UTC - event for execpod-affinitywtzlg: {kubelet node1} Killing: Stopping container agnhost-container Nov 5 23:30:38.876: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:30:38.876: INFO: Nov 5 23:30:38.880: INFO: Logging node info for node master1 Nov 5 23:30:38.882: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 50198 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:30:29 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:30:38.883: INFO: Logging kubelet events for node master1 Nov 5 23:30:38.886: INFO: Logging pods the kubelet 
thinks is on node master1 Nov 5 23:30:38.906: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:30:38.906: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:30:38.906: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:38.906: INFO: Container docker-registry ready: true, restart count 0 Nov 5 23:30:38.906: INFO: Container nginx ready: true, restart count 0 Nov 5 23:30:38.906: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:38.906: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:30:38.906: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:30:38.906: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:30:38.906: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 5 23:30:38.906: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Container kube-scheduler ready: true, restart count 0 Nov 5 23:30:38.906: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:30:38.906: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:30:38.906: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:38.906: INFO: Container coredns ready: true, restart count 2 W1105 23:30:38.924216 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 5 23:30:39.003: INFO: Latency metrics for node master1 Nov 5 23:30:39.003: INFO: Logging node info for node master2 Nov 5 23:30:39.006: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 50287 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:32 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:32 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:32 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:30:32 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:30:39.006: INFO: Logging kubelet events for node master2 Nov 5 23:30:39.010: INFO: Logging pods the kubelet thinks is on node master2 Nov 5 23:30:39.020: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:39.020: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:30:39.020: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:30:39.020: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:30:39.020: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Container kube-scheduler ready: true, restart count 3 Nov 5 23:30:39.020: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:30:39.020: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Container nfd-controller ready: true, restart count 0 Nov 5 23:30:39.020: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:30:39.020: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Container kube-proxy ready: true, restart count 1 Nov 5 23:30:39.020: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:30:39.020: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:30:39.020: INFO: Container kube-flannel ready: true, restart count 3 W1105 23:30:39.035420 
28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:30:39.626: INFO: Latency metrics for node master2 Nov 5 23:30:39.626: INFO: Logging node info for node master3 Nov 5 23:30:39.629: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 50381 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:37 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:37 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:37 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:30:37 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:30:39.630: INFO: Logging kubelet events for node master3 Nov 5 23:30:39.632: INFO: Logging pods the kubelet thinks is on node master3 Nov 5 23:30:39.642: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 5 23:30:39.642: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Init container install-cni ready: true, restart count 0 Nov 5 23:30:39.642: INFO: Container kube-flannel ready: true, restart count 1 Nov 5 23:30:39.642: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container coredns ready: true, restart count 1 Nov 5 23:30:39.642: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:39.642: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:30:39.642: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:30:39.642: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container kube-apiserver ready: true, restart count 0 Nov 5 23:30:39.642: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:30:39.642: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:30:39.642: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container autoscaler ready: true, restart count 1 Nov 5 23:30:39.642: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.642: INFO: Container kube-scheduler ready: true, restart count 3 W1105 23:30:39.658822 28 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:30:39.729: INFO: Latency metrics for node master3 Nov 5 23:30:39.729: INFO: Logging node info for node node1 Nov 5 23:30:39.731: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 50395 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:39 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:39 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:39 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:30:39 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:30:39.732: INFO: Logging kubelet events for node node1 Nov 5 23:30:39.734: INFO: Logging pods the kubelet thinks is on node node1 Nov 5 23:30:39.750: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded) 
Nov 5 23:30:39.750: INFO: Container discover ready: false, restart count 0 Nov 5 23:30:39.750: INFO: Container init ready: false, restart count 0 Nov 5 23:30:39.750: INFO: Container install ready: false, restart count 0 Nov 5 23:30:39.750: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:39.750: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:30:39.750: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:30:39.751: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:30:39.751: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:30:39.751: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:39.751: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:30:39.751: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:30:39.751: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 5 23:30:39.751: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:30:39.751: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:30:39.751: INFO: Container grafana ready: true, restart count 0 Nov 5 23:30:39.751: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:30:39.751: INFO: liveness-58ae240d-5759-470f-be6e-c54592abc01a started at 2021-11-05 23:28:35 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:30:39.751: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:30:39.751: INFO: test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab started at 2021-11-05 23:27:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container test-webserver ready: true, restart count 0 Nov 5 23:30:39.751: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:30:39.751: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:30:39.751: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:30:39.751: INFO: Container collectd ready: true, restart count 0 Nov 5 23:30:39.751: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:30:39.751: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:30:39.751: INFO: affinity-clusterip-timeout-v52rf started at 2021-11-05 23:30:21 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container affinity-clusterip-timeout ready: true, restart count 0 Nov 5 23:30:39.751: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:30:39.751: INFO: 
sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:30:39.751: INFO: execpod-affinitydtlzt started at 2021-11-05 23:30:27 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container agnhost-container ready: true, restart count 0 Nov 5 23:30:39.751: INFO: var-expansion-6088554d-bb50-477f-92c8-dc7b74a322eb started at 2021-11-05 23:28:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container dapi-container ready: true, restart count 0 Nov 5 23:30:39.751: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:30:39.751: INFO: netserver-0 started at 2021-11-05 23:30:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Container webserver ready: false, restart count 0 Nov 5 23:30:39.751: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 23:30:39.751: INFO: Init container install-cni ready: true, restart count 2 Nov 5 23:30:39.751: INFO: Container kube-flannel ready: true, restart count 3 W1105 23:30:39.765515 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:30:39.996: INFO: Latency metrics for node node1 Nov 5 23:30:39.996: INFO: Logging node info for node node2 Nov 5 23:30:39.999: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 50385 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 
feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 21:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:38 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:38 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-05 23:30:38 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-05 23:30:38 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 5 23:30:40.003: INFO: Logging kubelet events for node node2 Nov 5 23:30:40.009: INFO: Logging pods the kubelet thinks is on node node2 Nov 5 23:30:40.019: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:30:40.019: INFO: var-expansion-1a3ea181-486a-42bf-a7d5-a9a8c29e01c1 started at 2021-11-05 23:30:15 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container dapi-container ready: true, restart count 0 Nov 5 23:30:40.019: INFO: affinity-clusterip-timeout-mdrk8 started at 2021-11-05 23:30:21 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container affinity-clusterip-timeout ready: true, restart count 0 Nov 5 23:30:40.019: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 5 
23:30:40.019: INFO: Init container install-cni ready: true, restart count 1 Nov 5 23:30:40.019: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:30:40.019: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:30:40.019: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:30:40.019: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:30:40.019: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 5 23:30:40.019: INFO: Container collectd ready: true, restart count 0 Nov 5 23:30:40.019: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:30:40.019: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:30:40.019: INFO: affinity-clusterip-timeout-lswfx started at 2021-11-05 23:30:21 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container affinity-clusterip-timeout ready: true, restart count 0 Nov 5 23:30:40.019: INFO: netserver-1 started at 2021-11-05 23:30:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container webserver ready: false, restart count 0 Nov 5 23:30:40.019: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:40.019: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:30:40.019: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:30:40.019: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:40.019: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:30:40.019: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:30:40.019: INFO: ss-0 started at 2021-11-05 23:30:29 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container webserver ready: true, restart count 0 Nov 5 23:30:40.019: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:30:40.019: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:30:40.019: INFO: pod-subpath-test-configmap-mb6r started at 2021-11-05 23:30:26 +0000 UTC (0+1 container statuses recorded) Nov 5 23:30:40.019: INFO: Container test-container-subpath-configmap-mb6r ready: true, restart count 0 Nov 5 23:30:40.019: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 5 23:30:40.019: INFO: Container discover ready: false, restart count 0 Nov 5 23:30:40.019: INFO: Container init ready: false, restart count 0 Nov 5 23:30:40.019: INFO: Container install ready: false, restart count 0 Nov 5 23:30:40.020: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 5 23:30:40.020: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:30:40.020: INFO: Container node-exporter ready: true, 
restart count 0 W1105 23:30:40.033999 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:30:40.237: INFO: Latency metrics for node node2 Nov 5 23:30:40.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3543" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [151.397 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:30:21.187: Unexpected error: <*errors.errorString | 0xc0047a8940>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31392 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31392 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":636,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:40.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-9554/secret-test-6297f19b-9215-419c-a82f-2feb6f500eac STEP: Creating a pod to test consume secrets Nov 5 23:30:40.325: INFO: Waiting up to 5m0s for pod "pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5" in namespace "secrets-9554" to be "Succeeded or Failed" Nov 5 23:30:40.329: INFO: Pod "pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.353405ms Nov 5 23:30:42.332: INFO: Pod "pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00640424s Nov 5 23:30:44.337: INFO: Pod "pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011893218s STEP: Saw pod success Nov 5 23:30:44.337: INFO: Pod "pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5" satisfied condition "Succeeded or Failed" Nov 5 23:30:44.340: INFO: Trying to get logs from node node2 pod pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5 container env-test: STEP: delete the pod Nov 5 23:30:44.362: INFO: Waiting for pod pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5 to disappear Nov 5 23:30:44.365: INFO: Pod pod-configmaps-77aeac57-6299-4165-84e5-efb9f331f5e5 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:44.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9554" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":657,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:44.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:30:44.411: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Nov 5 23:30:44.427: INFO: The status of Pod pod-logs-websocket-f1811e25-21a9-4f9c-847a-606751df2320 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:46.430: INFO: The status of Pod pod-logs-websocket-f1811e25-21a9-4f9c-847a-606751df2320 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:48.430: INFO: The status of Pod pod-logs-websocket-f1811e25-21a9-4f9c-847a-606751df2320 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:48.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8882" for this suite. 
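Note: the websocket log-retrieval spec above reads the pod log subresource over an HTTP upgrade rather than a plain GET. A minimal manual sketch, assuming kubectl proxy on its default 127.0.0.1:8001 and a pod named my-pod in namespace default (both names illustrative; curl dumps the raw websocket frames without decoding them):

kubectl proxy &
curl -sN \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(head -c 16 /dev/urandom | base64)" \
  "http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log?follow=true"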
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":663,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:26.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9706 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 5 23:30:26.541: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 5 23:30:26.579: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:28.584: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:30.584: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:32.584: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:34.583: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:36.584: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:38.583: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:40.585: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:42.586: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:44.582: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:46.584: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:48.582: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 5 23:30:48.587: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 5 23:30:52.608: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Nov 5 23:30:52.608: INFO: Breadth first check of 10.244.3.48 on host 10.10.190.207... Nov 5 23:30:52.611: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.50:9080/dial?request=hostname&protocol=http&host=10.244.3.48&port=8080&tries=1'] Namespace:pod-network-test-9706 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:30:52.611: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:30:52.699: INFO: Waiting for responses: map[] Nov 5 23:30:52.699: INFO: reached 10.244.3.48 after 0/1 tries Nov 5 23:30:52.699: INFO: Breadth first check of 10.244.4.115 on host 10.10.190.208... 
Nov 5 23:30:52.702: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.50:9080/dial?request=hostname&protocol=http&host=10.244.4.115&port=8080&tries=1'] Namespace:pod-network-test-9706 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:30:52.702: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:30:52.792: INFO: Waiting for responses: map[] Nov 5 23:30:52.792: INFO: reached 10.244.4.115 after 0/1 tries Nov 5 23:30:52.792: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:52.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9706" for this suite. • [SLOW TEST:26.285 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":25,"skipped":552,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:26.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-mb6r STEP: Creating a pod to test atomic-volume-subpath Nov 5 23:30:26.970: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mb6r" in namespace "subpath-8059" to be "Succeeded or Failed" Nov 5 23:30:26.973: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093284ms Nov 5 23:30:28.976: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00597168s Nov 5 23:30:30.980: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009810022s Nov 5 23:30:32.983: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.012904022s Nov 5 23:30:34.986: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 8.016702531s Nov 5 23:30:36.992: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 10.021919748s Nov 5 23:30:38.995: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 12.025328633s Nov 5 23:30:40.999: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 14.029299073s Nov 5 23:30:43.003: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 16.033290681s Nov 5 23:30:45.007: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 18.037634156s Nov 5 23:30:47.012: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 20.042106424s Nov 5 23:30:49.017: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 22.047048163s Nov 5 23:30:51.022: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Running", Reason="", readiness=true. Elapsed: 24.052301789s Nov 5 23:30:53.025: INFO: Pod "pod-subpath-test-configmap-mb6r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.055734744s STEP: Saw pod success Nov 5 23:30:53.025: INFO: Pod "pod-subpath-test-configmap-mb6r" satisfied condition "Succeeded or Failed" Nov 5 23:30:53.028: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-mb6r container test-container-subpath-configmap-mb6r: STEP: delete the pod Nov 5 23:30:53.039: INFO: Waiting for pod pod-subpath-test-configmap-mb6r to disappear Nov 5 23:30:53.041: INFO: Pod pod-subpath-test-configmap-mb6r no longer exists STEP: Deleting pod pod-subpath-test-configmap-mb6r Nov 5 23:30:53.041: INFO: Deleting pod "pod-subpath-test-configmap-mb6r" in namespace "subpath-8059" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:53.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8059" for this suite. 
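Note: the subpath spec above exercises mounting a single ConfigMap key over a file that already exists in the container image, by combining mountPath with subPath. A minimal sketch of the same shape (all names illustrative; /etc/passwd stands in for any pre-existing file in the image):

kubectl create configmap subpath-demo-cm --from-literal=passwd=override-content
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/passwd"]   # prints the ConfigMap value
    volumeMounts:
    - name: cfg
      mountPath: /etc/passwd   # an existing file in the image
      subPath: passwd          # mount only this key, not the whole volume
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
EOF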
• [SLOW TEST:26.118 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":552,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:29.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Nov 5 23:30:29.089: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:30:37.102: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:55.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1212" for this suite. 
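Note: the CRD-publishing spec above creates CustomResourceDefinitions in two different API groups and then checks that both schemas are served from the aggregated OpenAPI endpoint. A manual spot-check follows; the group and kind are illustrative (a CRD with kind Foo in group stable.example.com is published under a reverse-group definition key such as com.example.stable.v1.Foo):

kubectl get --raw /openapi/v2 | grep -o 'com\.example\.stable\.v1\.Foo' | head -n 1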
• [SLOW TEST:26.764 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":34,"skipped":500,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:48.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 5 23:30:48.516: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:30:58.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8306" for this suite. 
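Note: the init-container spec above relies on the guarantee that entries in spec.initContainers run to completion, in declaration order, before any regular container starts, even on a RestartAlways pod. A minimal pod of the same shape (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: init-1
    image: busybox:1.28
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox:1.28
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
EOF
# Both init containers should report Completed before "app" runs:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'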
• [SLOW TEST:9.891 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":35,"skipped":678,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:15.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Nov 5 23:30:21.359: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7485 PodName:var-expansion-1a3ea181-486a-42bf-a7d5-a9a8c29e01c1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:30:21.359: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Nov 5 23:30:21.470: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7485 PodName:var-expansion-1a3ea181-486a-42bf-a7d5-a9a8c29e01c1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:30:21.470: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Nov 5 23:30:22.065: INFO: Successfully updated pod "var-expansion-1a3ea181-486a-42bf-a7d5-a9a8c29e01c1" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Nov 5 23:30:22.068: INFO: Deleting pod "var-expansion-1a3ea181-486a-42bf-a7d5-a9a8c29e01c1" in namespace "var-expansion-7485" Nov 5 23:30:22.072: INFO: Wait up to 5m0s for pod "var-expansion-1a3ea181-486a-42bf-a7d5-a9a8c29e01c1" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:00.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7485" for this suite. 
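Note: the variable-expansion spec above writes through a container subpath derived from pod metadata; the mechanism it exercises is subPathExpr, which expands $(VAR) references from the container's environment at mount time. A minimal sketch, assuming the downward API as the variable source (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "touch /volume_mount/test.log"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)   # expands to the pod's own name per instance
  volumes:
  - name: workdir
    emptyDir: {}
EOF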
• [SLOW TEST:44.774 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":34,"skipped":537,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:00.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-9ed59db2-2828-4c86-9405-5b57cb008a0d STEP: Creating a pod to test consume configMaps Nov 5 23:31:00.127: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f" in namespace "projected-1697" to be "Succeeded or Failed" Nov 5 23:31:00.132: INFO: Pod "pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.718511ms Nov 5 23:31:02.137: INFO: Pod "pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009984506s Nov 5 23:31:04.141: INFO: Pod "pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013850909s STEP: Saw pod success Nov 5 23:31:04.141: INFO: Pod "pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f" satisfied condition "Succeeded or Failed" Nov 5 23:31:04.143: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f container agnhost-container: STEP: delete the pod Nov 5 23:31:04.156: INFO: Waiting for pod pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f to disappear Nov 5 23:31:04.158: INFO: Pod pod-projected-configmaps-2d2c8b99-751f-4248-81ab-e6b394d7b15f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:04.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1697" for this suite. 
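Note: "as non-root" in the projected-ConfigMap spec above means the consuming container runs with a non-zero UID and must still be able to read the projected file (projected volume items default to mode 0644). A minimal sketch (names and UID illustrative):

kubectl create configmap projected-demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
  containers:
  - name: agnhost-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
EOF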
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":539,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:04.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Nov 5 23:31:04.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8440 create -f -' Nov 5 23:31:04.637: INFO: stderr: "" Nov 5 23:31:04.637: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Nov 5 23:31:04.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8440 diff -f -' Nov 5 23:31:04.981: INFO: rc: 1 Nov 5 23:31:04.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8440 delete -f -' Nov 5 23:31:05.130: INFO: stderr: "" Nov 5 23:31:05.130: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:05.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8440" for this suite. 
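Note: the "rc: 1" reported for "kubectl diff" above is the expected outcome, not an error: kubectl diff exits 0 when live and declared states match, 1 when a difference is found, and greater than 1 on real failure, which is why the spec passes. A standalone reproduction (image tags illustrative):

kubectl create deployment httpd-deployment --image=httpd:2.4.38 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl create deployment httpd-deployment --image=httpd:2.4.39 \
  --dry-run=client -o yaml | kubectl diff -f -
echo $?   # 1 => a difference was detected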
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":36,"skipped":572,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:18.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6439 Nov 5 23:30:18.860: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:20.864: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Nov 5 23:30:20.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 5 23:30:21.156: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Nov 5 23:30:21.156: INFO: stdout: "iptables" Nov 5 23:30:21.156: INFO: proxyMode: iptables Nov 5 23:30:21.164: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 5 23:30:21.165: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-6439 STEP: creating replication controller affinity-clusterip-timeout in namespace services-6439 I1105 23:30:21.175246 39 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-6439, replica count: 3 I1105 23:30:24.226774 39 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1105 23:30:27.227728 39 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 5 23:30:27.231: INFO: Creating new exec pod Nov 5 23:30:32.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinitydtlzt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Nov 5 23:30:32.513: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Nov 5 23:30:32.513: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:30:32.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinitydtlzt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.56.96 80' Nov 5 23:30:32.761: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 
10.233.56.96 80\nConnection to 10.233.56.96 80 port [tcp/http] succeeded!\n" Nov 5 23:30:32.761: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 5 23:30:32.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinitydtlzt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.56.96:80/ ; done' Nov 5 23:30:33.062: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n" Nov 5 23:30:33.062: INFO: stdout: "\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf\naffinity-clusterip-timeout-v52rf" Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Received response from host: 
affinity-clusterip-timeout-v52rf Nov 5 23:30:33.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinitydtlzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.56.96:80/' Nov 5 23:30:33.311: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n" Nov 5 23:30:33.311: INFO: stdout: "affinity-clusterip-timeout-v52rf" Nov 5 23:30:53.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinitydtlzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.56.96:80/' Nov 5 23:30:53.556: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.56.96:80/\n" Nov 5 23:30:53.556: INFO: stdout: "affinity-clusterip-timeout-mdrk8" Nov 5 23:30:53.556: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-6439, will wait for the garbage collector to delete the pods Nov 5 23:30:53.617: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.353666ms Nov 5 23:30:53.718: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.900754ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:08.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6439" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:49.915 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":396,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:52.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:09.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1187" for this suite. • [SLOW TEST:16.108 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":27,"skipped":442,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:55.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:30:56.396: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:30:58.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751856, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751856, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751856, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751856, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:31:01.414: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:31:01.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7053-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:09.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9308" for this suite. STEP: Destroying namespace "webhook-9308-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.698 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":35,"skipped":524,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:29.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Nov 5 23:30:30.128: INFO: Successfully updated pod "var-expansion-6088554d-bb50-477f-92c8-dc7b74a322eb" STEP: waiting for pod running STEP: deleting the pod gracefully Nov 5 23:30:32.136: INFO: Deleting pod "var-expansion-6088554d-bb50-477f-92c8-dc7b74a322eb" in namespace "var-expansion-9170" Nov 5 23:30:32.140: INFO: Wait up to 5m0s for pod "var-expansion-6088554d-bb50-477f-92c8-dc7b74a322eb" to be fully deleted [AfterEach] [sig-node] Variable Expansion 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:10.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9170" for this suite. • [SLOW TEST:160.581 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":12,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:08.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-42b3f81b-dbe6-4776-be3c-fa483108732f STEP: Creating a pod to test consume secrets Nov 5 23:31:08.796: INFO: Waiting up to 5m0s for pod "pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb" in namespace "secrets-496" to be "Succeeded or Failed" Nov 5 23:31:08.800: INFO: Pod "pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606365ms Nov 5 23:31:10.804: INFO: Pod "pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007932358s Nov 5 23:31:12.809: INFO: Pod "pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01273771s STEP: Saw pod success Nov 5 23:31:12.809: INFO: Pod "pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb" satisfied condition "Succeeded or Failed" Nov 5 23:31:12.812: INFO: Trying to get logs from node node1 pod pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb container secret-volume-test: STEP: delete the pod Nov 5 23:31:12.826: INFO: Waiting for pod pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb to disappear Nov 5 23:31:12.828: INFO: Pod pod-secrets-6e93279c-5adc-467f-940f-fda75059b4eb no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:12.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-496" for this suite. STEP: Destroying namespace "secret-namespace-9183" for this suite. 
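
As an aside: the Secrets test torn down above asserts that a secret volume resolves strictly in the pod's own namespace, even when a same-named Secret exists in another namespace. Reduced to a minimal sketch (the names, image, and data below are invented for illustration; the test generates its own):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # an identically named Secret lives in a second namespace
  namespace: secrets-a
data:
  data-1: dmFsdWUtMQ==         # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: secrets-a
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox             # illustrative; the e2e test uses its own agnhost image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test  # resolved in the pod's namespace only

Because secretName is namespace-local, the same-named Secret elsewhere can never be mounted, which is exactly what the test verifies.
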
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":398,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:05.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Nov 5 23:31:05.402: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:31:05.414: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:31:07.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:31:09.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751865, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:31:12.434: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing 
validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:13.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6419" for this suite. STEP: Destroying namespace "webhook-6419-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.387 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":37,"skipped":600,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:10.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-f5ce9731-7169-454d-888e-fad01f9b7926 STEP: Creating a pod to test consume secrets Nov 5 23:31:10.231: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d" in namespace "projected-1626" to be "Succeeded or Failed" Nov 5 23:31:10.233: INFO: Pod "pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350029ms Nov 5 23:31:12.237: INFO: Pod "pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006069343s Nov 5 23:31:14.240: INFO: Pod "pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009279639s STEP: Saw pod success Nov 5 23:31:14.240: INFO: Pod "pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d" satisfied condition "Succeeded or Failed" Nov 5 23:31:14.242: INFO: Trying to get logs from node node2 pod pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d container projected-secret-volume-test: STEP: delete the pod Nov 5 23:31:14.255: INFO: Waiting for pod pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d to disappear Nov 5 23:31:14.257: INFO: Pod pod-projected-secrets-77d4c514-0d53-44f7-b8e0-8c1e796a971d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:14.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1626" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:09.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:31:09.070: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:15.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1649" for this suite. 
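
As an aside: the CustomResourceDefinition test above only needs a throwaway definition it can create and delete through the apiextensions.k8s.io/v1 API. A minimal sketch with an invented group and kind (the test generates random names of its own):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true  # accept arbitrary fields

Deleting the CRD also removes all Foo objects, which is the create/delete round trip the test exercises.
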
• [SLOW TEST:6.046 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":28,"skipped":447,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:09.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8219.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8219.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:31:15.945: INFO: DNS probes using dns-8219/dns-test-4bea9d48-8f26-4d2c-96ab-468b6fd49433 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:15.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8219" for this suite. 
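
As an aside: the DNS check above runs dig loops from two probe pods and writes OK markers into /results. A one-shot, hand-run equivalent of the same lookup, with an invented pod name and a generic image in place of the test's probes, would be:

apiVersion: v1
kind: Pod
metadata:
  name: dns-probe               # invented name, not the test's probe pod
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox              # any image that ships nslookup works here
    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local"]

A non-empty answer exercises the same cluster-DNS resolution path that the test probes over both UDP and TCP.
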
• [SLOW TEST:6.082 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":36,"skipped":678,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:13.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:31:13.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379" in namespace "projected-5705" to be "Succeeded or Failed" Nov 5 23:31:13.643: INFO: Pod "downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793684ms Nov 5 23:31:15.645: INFO: Pod "downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005692895s Nov 5 23:31:17.649: INFO: Pod "downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009484011s Nov 5 23:31:19.653: INFO: Pod "downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013390785s STEP: Saw pod success Nov 5 23:31:19.653: INFO: Pod "downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379" satisfied condition "Succeeded or Failed" Nov 5 23:31:19.655: INFO: Trying to get logs from node node2 pod downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379 container client-container: STEP: delete the pod Nov 5 23:31:20.021: INFO: Waiting for pod downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379 to disappear Nov 5 23:31:20.023: INFO: Pod downwardapi-volume-15ab6f08-dc24-4f3b-9768-4d4fe8bcd379 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:20.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5705" for this suite. 
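
As an aside: the Projected downwardAPI test above surfaces the container's CPU limit as a file in a projected volume. A minimal sketch of that wiring, with invented names and a generic image:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # invented name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # illustrative; the test uses its own image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu

With the default divisor of 1 the file reports the limit in whole cores, so the container here would print 1.
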
• [SLOW TEST:6.427 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:15.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-277bcf94-9fd6-4b48-86d4-09891ff27f6a STEP: Creating a pod to test consume configMaps Nov 5 23:31:15.142: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df" in namespace "configmap-6148" to be "Succeeded or Failed" Nov 5 23:31:15.144: INFO: Pod "pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df": Phase="Pending", Reason="", readiness=false. Elapsed: 1.997878ms Nov 5 23:31:17.148: INFO: Pod "pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005359571s Nov 5 23:31:19.150: INFO: Pod "pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008065647s Nov 5 23:31:21.156: INFO: Pod "pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013606419s STEP: Saw pod success Nov 5 23:31:21.156: INFO: Pod "pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df" satisfied condition "Succeeded or Failed" Nov 5 23:31:21.158: INFO: Trying to get logs from node node2 pod pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df container agnhost-container: STEP: delete the pod Nov 5 23:31:21.172: INFO: Waiting for pod pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df to disappear Nov 5 23:31:21.174: INFO: Pod pod-configmaps-0d4ae890-73e7-4c3c-a0ec-77a868b762df no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:21.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6148" for this suite. 
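
As an aside: the ConfigMap test above is the standard volume-consumption pattern. A minimal illustrative pair, with invented names and a generic image in place of the test's agnhost container:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # invented name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox              # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume

The pod reaches Succeeded exactly when the mounted file holds value-1, mirroring the "Succeeded or Failed" wait in the log.
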
• [SLOW TEST:6.075 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":453,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:53.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8877 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 5 23:30:53.084: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 5 23:30:53.116: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:55.119: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:30:57.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:30:59.120: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:01.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:03.120: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:05.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:07.121: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:09.120: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:11.118: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:13.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 5 23:31:15.119: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 5 23:31:15.125: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 5 23:31:21.147: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Nov 5 23:31:21.147: INFO: Breadth first check of 10.244.3.52 on host 10.10.190.207... Nov 5 23:31:21.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.57:9080/dial?request=hostname&protocol=udp&host=10.244.3.52&port=8081&tries=1'] Namespace:pod-network-test-8877 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:31:21.150: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:31:21.400: INFO: Waiting for responses: map[] Nov 5 23:31:21.400: INFO: reached 10.244.3.52 after 0/1 tries Nov 5 23:31:21.400: INFO: Breadth first check of 10.244.4.122 on host 10.10.190.208... 
Nov 5 23:31:21.402: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.57:9080/dial?request=hostname&protocol=udp&host=10.244.4.122&port=8081&tries=1'] Namespace:pod-network-test-8877 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 5 23:31:21.402: INFO: >>> kubeConfig: /root/.kube/config Nov 5 23:31:21.527: INFO: Waiting for responses: map[] Nov 5 23:31:21.527: INFO: reached 10.244.4.122 after 0/1 tries Nov 5 23:31:21.527: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:21.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8877" for this suite. • [SLOW TEST:28.474 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":556,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:15.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:31:16.003: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6" in namespace "security-context-test-4535" to be "Succeeded or Failed" Nov 5 23:31:16.005: INFO: Pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457612ms Nov 5 23:31:18.008: INFO: Pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004725758s Nov 5 23:31:20.011: INFO: Pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00810812s Nov 5 23:31:22.014: INFO: Pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010923579s Nov 5 23:31:24.017: INFO: Pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.014460081s Nov 5 23:31:24.017: INFO: Pod "alpine-nnp-false-cfa0dfa3-87e9-4821-9bdc-c47c7b8d15d6" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:24.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4535" for this suite. • [SLOW TEST:8.059 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":684,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:24.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Nov 5 23:31:24.099: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-350 ba991893-6912-4a98-a78d-68774c7ea4de 51710 0 2021-11-05 23:31:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-11-05 23:31:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 5 23:31:24.099: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-350 ba991893-6912-4a98-a78d-68774c7ea4de 51711 0 2021-11-05 23:31:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-11-05 23:31:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:24.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "watch-350" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":38,"skipped":696,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":609,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:20.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Nov 5 23:31:20.060: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Nov 5 23:31:20.064: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Nov 5 23:31:20.064: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Nov 5 23:31:20.082: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Nov 5 23:31:20.082: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Nov 5 23:31:20.095: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Nov 5 23:31:20.095: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Nov 5 23:31:27.141: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:31:27.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-8671" for this suite.
• [SLOW TEST:7.131 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":171,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:31:14.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
Nov 5 23:31:14.288: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Nov 5 23:31:14.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -'
Nov 5 23:31:14.675: INFO: stderr: ""
Nov 5 23:31:14.675: INFO: stdout: "service/agnhost-replica created\n"
Nov 5 23:31:14.675: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Nov 5 23:31:14.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -'
Nov 5 23:31:15.016: INFO: stderr: ""
Nov 5 23:31:15.016: INFO: stdout: "service/agnhost-primary created\n"
Nov 5 23:31:15.016: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Nov 5 23:31:15.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -'
Nov 5 23:31:15.378: INFO: stderr: ""
Nov 5 23:31:15.378: INFO: stdout: "service/frontend created\n"
Nov 5 23:31:15.379: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Nov 5 23:31:15.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -'
Nov 5 23:31:15.703: INFO: stderr: ""
Nov 5 23:31:15.703: INFO: stdout: "deployment.apps/frontend created\n"
Nov 5 23:31:15.703: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Nov 5 23:31:15.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -'
Nov 5 23:31:16.050: INFO: stderr: ""
Nov 5 23:31:16.050: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Nov 5 23:31:16.050: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Nov 5 23:31:16.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -'
Nov 5 23:31:16.403: INFO: stderr: ""
Nov 5 23:31:16.403: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Nov 5 23:31:16.403: INFO: Waiting for all frontend pods to be Running.
Nov 5 23:31:26.457: INFO: Waiting for frontend to serve content.
Nov 5 23:31:26.464: INFO: Trying to add a new entry to the guestbook.
Nov 5 23:31:26.472: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Nov 5 23:31:26.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete --grace-period=0 --force -f -'
Nov 5 23:31:26.620: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 5 23:31:26.620: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Nov 5 23:31:26.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete --grace-period=0 --force -f -'
Nov 5 23:31:26.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:31:26.767: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Nov 5 23:31:26.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete --grace-period=0 --force -f -' Nov 5 23:31:26.906: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:31:26.906: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 5 23:31:26.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete --grace-period=0 --force -f -' Nov 5 23:31:27.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:31:27.034: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 5 23:31:27.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete --grace-period=0 --force -f -' Nov 5 23:31:27.172: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:31:27.172: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Nov 5 23:31:27.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete --grace-period=0 --force -f -' Nov 5 23:31:27.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:31:27.287: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:27.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6041" for this suite. 
• [SLOW TEST:13.028 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":14,"skipped":171,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:21.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Nov 5 23:31:21.719: INFO: Waiting up to 5m0s for pod "var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b" in namespace "var-expansion-9914" to be "Succeeded or Failed" Nov 5 23:31:21.723: INFO: Pod "var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.338437ms Nov 5 23:31:23.726: INFO: Pod "var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006758598s Nov 5 23:31:25.729: INFO: Pod "var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009789374s Nov 5 23:31:27.734: INFO: Pod "var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014455069s STEP: Saw pod success Nov 5 23:31:27.734: INFO: Pod "var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b" satisfied condition "Succeeded or Failed" Nov 5 23:31:27.736: INFO: Trying to get logs from node node2 pod var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b container dapi-container: STEP: delete the pod Nov 5 23:31:27.753: INFO: Waiting for pod var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b to disappear Nov 5 23:31:27.755: INFO: Pod var-expansion-e3898cef-0125-45b4-9d49-5d9ed812444b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:27.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9914" for this suite. 
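
As an aside: the Variable Expansion test above relies on Kubernetes expanding $(VAR) references in a container's command before the container starts. A minimal sketch with an invented pod name and value:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo      # invented name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"       # invented value
    # $(MESSAGE) is substituted by Kubernetes itself before the
    # command runs, not by the shell
    command: ["sh", "-c", "echo $(MESSAGE)"]

The container prints test-value, demonstrating the substitution path the test verifies.
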
• [SLOW TEST:6.078 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":650,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:24.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 5 23:31:24.179: INFO: Waiting up to 5m0s for pod "downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527" in namespace "downward-api-6982" to be "Succeeded or Failed" Nov 5 23:31:24.183: INFO: Pod "downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351392ms Nov 5 23:31:26.186: INFO: Pod "downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007678684s Nov 5 23:31:28.190: INFO: Pod "downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011116151s STEP: Saw pod success Nov 5 23:31:28.190: INFO: Pod "downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527" satisfied condition "Succeeded or Failed" Nov 5 23:31:28.193: INFO: Trying to get logs from node node1 pod downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527 container dapi-container: STEP: delete the pod Nov 5 23:31:28.325: INFO: Waiting for pod downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527 to disappear Nov 5 23:31:28.327: INFO: Pod downward-api-0bf03de1-8903-4ec9-8f78-36e11a5e9527 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:28.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6982" for this suite. 
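
As an aside: the Downward API test above checks the documented fallback that, when a container declares no limits, resourceFieldRef environment variables report the node's allocatable capacity instead. A minimal sketch (invented names; no limits set on purpose):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo       # invented name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory

With no limits on the container, both variables fall back to node allocatable values, which is exactly the behavior asserted by the test.
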
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":717,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:29.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6404 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-6404 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6404 Nov 5 23:30:29.351: INFO: Found 0 stateful pods, waiting for 1 Nov 5 23:30:39.356: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Nov 5 23:30:39.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:30:39.625: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:30:39.625: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:30:39.625: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:30:39.628: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 5 23:30:49.632: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:30:49.632: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:30:49.643: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:30:49.643: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:30:49.643: INFO: Nov 5 23:30:49.643: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 5 23:30:50.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997038113s Nov 5 23:30:51.650: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993592566s Nov 5 23:30:52.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990400779s Nov 5 
23:30:53.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986454118s Nov 5 23:30:54.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.983418039s Nov 5 23:30:55.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.979511006s Nov 5 23:30:56.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.974500878s Nov 5 23:30:57.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.96906533s Nov 5 23:30:58.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 964.242221ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6404 Nov 5 23:30:59.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:30:59.925: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 5 23:30:59.925: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:30:59.925: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:30:59.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:31:00.494: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 5 23:31:00.494: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:31:00.494: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:31:00.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 5 23:31:00.738: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 5 23:31:00.738: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 5 23:31:00.738: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 5 23:31:00.742: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 5 23:31:10.745: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:31:10.745: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 5 23:31:10.745: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Nov 5 23:31:10.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:31:11.020: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:31:11.021: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:31:11.021: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 
23:31:11.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:31:11.354: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:31:11.354: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:31:11.354: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:31:11.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6404 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 5 23:31:11.822: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 5 23:31:11.822: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 5 23:31:11.822: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 5 23:31:11.822: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:31:11.824: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 5 23:31:21.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:31:21.831: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:31:21.831: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 5 23:31:21.841: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:21.841: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:21.841: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:21.842: INFO: ss-2 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:21.842: INFO: Nov 5 23:31:21.842: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 5 23:31:22.847: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:22.847: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:22.847: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:22.847: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:22.847: INFO: Nov 5 23:31:22.847: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 5 23:31:23.852: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:23.852: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:23.852: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:23.852: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:23.852: INFO: Nov 5 23:31:23.852: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 5 23:31:24.856: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:24.857: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:24.857: INFO: 
ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:24.857: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:24.857: INFO: Nov 5 23:31:24.857: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 5 23:31:25.863: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:25.863: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:25.863: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:25.863: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:25.863: INFO: Nov 5 23:31:25.863: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 5 23:31:26.866: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:26.866: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:26.867: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:26.867: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:26.867: INFO: Nov 5 23:31:26.867: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 5 23:31:27.869: INFO: POD NODE PHASE GRACE CONDITIONS Nov 5 23:31:27.869: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:29 +0000 UTC }] Nov 5 23:31:27.870: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-05 23:30:49 +0000 UTC }] Nov 5 23:31:27.870: INFO: Nov 5 23:31:27.870: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 5 23:31:28.872: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.968249905s Nov 5 23:31:29.876: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.965204417s Nov 5 23:31:30.880: INFO: Verifying statefulset ss doesn't scale past 0 for another 960.444102ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6404 Nov 5 23:31:31.886: INFO: Scaling statefulset ss to 0 Nov 5 23:31:31.894: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 5 23:31:31.896: INFO: Deleting all statefulset in ns statefulset-6404 Nov 5 23:31:31.898: INFO: Scaling statefulset ss to 0 Nov 5 23:31:31.905: INFO: Waiting for statefulset status.replicas updated to 0 Nov 5 23:31:31.907: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:31.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6404" for this suite.
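In the run above, readiness is toggled purely by moving index.html in and out of the httpd document root: each pod's readiness check evidently depends on that file, so hiding it flips the pod to Ready=false, and the spec then asserts that burst-mode scaling proceeds even while every pod is unhealthy. A minimal sketch of the same sequence driven by hand with kubectl (assuming, as the test sets up, a StatefulSet named ss with parallel pod management in namespace statefulset-6404):

kubectl --namespace=statefulset-6404 scale statefulset ss --replicas=3   # burst up; all pods start at once
# Break one pod's readiness probe by hiding the file it serves.
kubectl --namespace=statefulset-6404 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
kubectl --namespace=statefulset-6404 scale statefulset ss --replicas=0   # scale-down must not halt on the unready pod
kubectl --namespace=statefulset-6404 get pods --watch                    # ss-0..ss-2 terminate even though none is Ready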
• [SLOW TEST:62.603 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":26,"skipped":437,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:27:28.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab in namespace container-probe-3583 Nov 5 23:27:33.008: INFO: Started pod test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab in namespace container-probe-3583 STEP: checking the pod's current state and verifying that restartCount is present Nov 5 23:27:33.010: INFO: Initial restart count of pod test-webserver-e71c8083-eaeb-4cb0-956a-7b0efb4178ab is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:33.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3583" for this suite. 
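The probe spec above is a negative test: it starts a web server with an HTTP liveness probe on /healthz, then simply watches restartCount for roughly four minutes and passes because it stays at 0. A minimal sketch of that shape of pod (a hypothetical example, not the test's own manifest; the image is a placeholder for any container answering 200 on /healthz on port 80):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: registry.example.com/healthz-server:latest  # placeholder: must serve 200 on /healthz
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
EOF
# As long as /healthz keeps returning 2xx, this stays at 0:
kubectl get pod test-webserver -o jsonpath='{.status.containerStatuses[0].restartCount}'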
• [SLOW TEST:244.545 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:31.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:31:31.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac" in namespace "projected-8223" to be "Succeeded or Failed" Nov 5 23:31:31.966: INFO: Pod "downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124921ms Nov 5 23:31:33.969: INFO: Pod "downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007791378s Nov 5 23:31:35.974: INFO: Pod "downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012944142s Nov 5 23:31:37.979: INFO: Pod "downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017955356s STEP: Saw pod success Nov 5 23:31:37.979: INFO: Pod "downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac" satisfied condition "Succeeded or Failed" Nov 5 23:31:37.982: INFO: Trying to get logs from node node1 pod downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac container client-container: STEP: delete the pod Nov 5 23:31:38.430: INFO: Waiting for pod downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac to disappear Nov 5 23:31:38.432: INFO: Pod downwardapi-volume-59f87016-8865-440c-bf60-9ee05e4faeac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:38.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8223" for this suite. 
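The projected downwardAPI spec above reads a container's own memory request back out of a file mounted from a projected volume. A minimal sketch of that wiring (pod, container, and file names are illustrative, not the test's):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.34
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
# With the default divisor of 1 the file holds the request in bytes: 33554432.
kubectl logs downwardapi-volume-demo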
• [SLOW TEST:6.513 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":438,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":39,"skipped":609,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:27.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 5 23:31:27.199: INFO: Waiting up to 5m0s for pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa" in namespace "emptydir-505" to be "Succeeded or Failed" Nov 5 23:31:27.201: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68968ms Nov 5 23:31:29.204: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005361593s Nov 5 23:31:31.210: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010836434s Nov 5 23:31:33.213: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01404831s Nov 5 23:31:35.216: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01749045s Nov 5 23:31:37.221: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022023495s Nov 5 23:31:39.224: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.025527141s STEP: Saw pod success Nov 5 23:31:39.224: INFO: Pod "pod-0729fe71-d4d3-49e2-823e-5665456735fa" satisfied condition "Succeeded or Failed" Nov 5 23:31:39.227: INFO: Trying to get logs from node node2 pod pod-0729fe71-d4d3-49e2-823e-5665456735fa container test-container: STEP: delete the pod Nov 5 23:31:39.346: INFO: Waiting for pod pod-0729fe71-d4d3-49e2-823e-5665456735fa to disappear Nov 5 23:31:39.349: INFO: Pod pod-0729fe71-d4d3-49e2-823e-5665456735fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:39.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-505" for this suite. • [SLOW TEST:12.189 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":609,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:39.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Nov 5 23:31:39.415: INFO: Waiting up to 5m0s for pod "pod-f0b402af-e370-4c48-99b6-7806c2200d9f" in namespace "emptydir-6810" to be "Succeeded or Failed" Nov 5 23:31:39.417: INFO: Pod "pod-f0b402af-e370-4c48-99b6-7806c2200d9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576716ms Nov 5 23:31:41.421: INFO: Pod "pod-f0b402af-e370-4c48-99b6-7806c2200d9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006015338s Nov 5 23:31:43.424: INFO: Pod "pod-f0b402af-e370-4c48-99b6-7806c2200d9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008897288s STEP: Saw pod success Nov 5 23:31:43.424: INFO: Pod "pod-f0b402af-e370-4c48-99b6-7806c2200d9f" satisfied condition "Succeeded or Failed" Nov 5 23:31:43.426: INFO: Trying to get logs from node node1 pod pod-f0b402af-e370-4c48-99b6-7806c2200d9f container test-container: STEP: delete the pod Nov 5 23:31:43.439: INFO: Waiting for pod pod-f0b402af-e370-4c48-99b6-7806c2200d9f to disappear Nov 5 23:31:43.442: INFO: Pod pod-f0b402af-e370-4c48-99b6-7806c2200d9f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:43.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6810" for this suite. 
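Both emptyDir specs above reduce to the same wiring: an emptyDir volume backed by tmpfs via medium: Memory, with the first test writing a 0666-mode file into it and the second checking the volume root's own mode. A minimal sketch that surfaces both facts (pod and mount names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.34
    # Print the mount's filesystem type, then the volume root's mode bits.
    command: ["sh", "-c", "grep /test-volume /proc/mounts; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory     # back the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-mode-demo   # expect a tmpfs mount entry and mode 777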
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":617,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:27.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-6873 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6873 to expose endpoints map[] Nov 5 23:31:27.343: INFO: successfully validated that service endpoint-test2 in namespace services-6873 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6873 Nov 5 23:31:27.356: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:29.359: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:31.359: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:33.359: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:35.361: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:37.361: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:39.359: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6873 to expose endpoints map[pod1:[80]] Nov 5 23:31:39.369: INFO: successfully validated that service endpoint-test2 in namespace services-6873 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6873 Nov 5 23:31:39.381: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:41.386: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:43.385: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:45.384: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6873 to expose endpoints map[pod1:[80] pod2:[80]] Nov 5 23:31:45.400: INFO: successfully validated that service endpoint-test2 in namespace services-6873 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6873 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6873 to expose endpoints map[pod2:[80]] Nov 5 23:31:45.416: INFO: successfully validated that service endpoint-test2 in namespace services-6873 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6873 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-6873 to expose endpoints map[] Nov 5 23:31:45.428: INFO: successfully validated that service endpoint-test2 in namespace services-6873 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:45.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6873" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:18.135 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":15,"skipped":178,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:30:58.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-6cef0e1f-2586-486b-8862-5e8a50fd5faf in namespace container-probe-167 Nov 5 23:31:02.441: INFO: Started pod busybox-6cef0e1f-2586-486b-8862-5e8a50fd5faf in namespace container-probe-167 STEP: checking the pod's current state and verifying that restartCount is present Nov 5 23:31:02.443: INFO: Initial restart count of pod busybox-6cef0e1f-2586-486b-8862-5e8a50fd5faf is 0 Nov 5 23:31:52.545: INFO: Restart count of pod container-probe-167/busybox-6cef0e1f-2586-486b-8862-5e8a50fd5faf is now 1 (50.101580051s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:52.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-167" for this suite. 
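The exec-probe spec above expects the opposite outcome of the /healthz one: the probed file disappears ten seconds in, so the kubelet must kill and restart the container, and the test waits for restartCount to reach 1 (here after 50s). A minimal sketch of that classic pattern (names and timings illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.34
    # Healthy for 10s, then the probe target vanishes and the probe starts failing.
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# After enough consecutive probe failures the container is restarted,
# so this climbs from 0 to 1:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'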
• [SLOW TEST:54.159 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":686,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:28.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:31:28.803: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:31:30.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:31:32.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:31:34.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:31:36.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:31:38.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:31:40.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:31:43.829: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:53.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7568" for this suite. STEP: Destroying namespace "webhook-7568-markers" for this suite. 
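Once the webhook deployment and service are up, the test registers them through the AdmissionRegistration API and then verifies that non-compliant pods and configmaps are rejected while a whitelisted namespace bypasses the check. A minimal sketch of such a registration (the service name webhook the log mentions is reused, but the configuration name, rules, and path are illustrative, and the caBundle is elided):

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-demo-webhook
webhooks:
- name: deny.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-7568      # the namespace the test deploys into
      name: e2e-test-webhook       # the service the log waits on above
      path: /validate              # illustrative; must match the server's handler
    # caBundle: <base64 CA that signed the webhook's serving certificate>
EOF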
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.572 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:52.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 5 23:31:52.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa" in namespace "projected-3032" to be "Succeeded or Failed" Nov 5 23:31:52.647: INFO: Pod "downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26777ms Nov 5 23:31:54.653: INFO: Pod "downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011822227s Nov 5 23:31:56.657: INFO: Pod "downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015888174s STEP: Saw pod success Nov 5 23:31:56.657: INFO: Pod "downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa" satisfied condition "Succeeded or Failed" Nov 5 23:31:56.659: INFO: Trying to get logs from node node1 pod downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa container client-container: STEP: delete the pod Nov 5 23:31:56.672: INFO: Waiting for pod downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa to disappear Nov 5 23:31:56.674: INFO: Pod downwardapi-volume-d6701686-d8c6-4ed5-b79a-6d35482089fa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:56.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3032" for this suite. 
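The cpu-limit variant of the downwardAPI test relies on a documented fallback: when the container declares no CPU limit, a limits.cpu resourceFieldRef resolves to the node's allocatable CPU instead. A quick way to see the value the pod's mounted file should match, using the node name from the log:

# Node allocatable CPU, which the downward API reports when no limit is set.
kubectl get node node1 -o jsonpath='{.status.allocatable.cpu}'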
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":706,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:21.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Nov 5 23:31:21.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 create -f -' Nov 5 23:31:21.569: INFO: stderr: "" Nov 5 23:31:21.569: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 5 23:31:21.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:21.719: INFO: stderr: "" Nov 5 23:31:21.719: INFO: stdout: "update-demo-nautilus-bx92s update-demo-nautilus-kxqfp " Nov 5 23:31:21.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-bx92s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:21.880: INFO: stderr: "" Nov 5 23:31:21.880: INFO: stdout: "" Nov 5 23:31:21.880: INFO: update-demo-nautilus-bx92s is created but not running Nov 5 23:31:26.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:27.053: INFO: stderr: "" Nov 5 23:31:27.053: INFO: stdout: "update-demo-nautilus-bx92s update-demo-nautilus-kxqfp " Nov 5 23:31:27.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-bx92s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:27.208: INFO: stderr: "" Nov 5 23:31:27.208: INFO: stdout: "" Nov 5 23:31:27.208: INFO: update-demo-nautilus-bx92s is created but not running Nov 5 23:31:32.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:32.370: INFO: stderr: "" Nov 5 23:31:32.370: INFO: stdout: "update-demo-nautilus-bx92s update-demo-nautilus-kxqfp " Nov 5 23:31:32.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-bx92s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:32.528: INFO: stderr: "" Nov 5 23:31:32.528: INFO: stdout: "true" Nov 5 23:31:32.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-bx92s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:32.699: INFO: stderr: "" Nov 5 23:31:32.699: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:32.699: INFO: validating pod update-demo-nautilus-bx92s Nov 5 23:31:32.703: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:32.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:32.703: INFO: update-demo-nautilus-bx92s is verified up and running Nov 5 23:31:32.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:32.883: INFO: stderr: "" Nov 5 23:31:32.883: INFO: stdout: "true" Nov 5 23:31:32.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:33.034: INFO: stderr: "" Nov 5 23:31:33.034: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:33.034: INFO: validating pod update-demo-nautilus-kxqfp Nov 5 23:31:33.038: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:33.038: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:33.038: INFO: update-demo-nautilus-kxqfp is verified up and running STEP: scaling down the replication controller Nov 5 23:31:33.048: INFO: scanned /root for discovery docs: Nov 5 23:31:33.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Nov 5 23:31:33.269: INFO: stderr: "" Nov 5 23:31:33.269: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 5 23:31:33.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:33.448: INFO: stderr: "" Nov 5 23:31:33.448: INFO: stdout: "update-demo-nautilus-bx92s update-demo-nautilus-kxqfp " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 5 23:31:38.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:38.618: INFO: stderr: "" Nov 5 23:31:38.618: INFO: stdout: "update-demo-nautilus-bx92s update-demo-nautilus-kxqfp " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 5 23:31:43.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:43.793: INFO: stderr: "" Nov 5 23:31:43.793: INFO: stdout: "update-demo-nautilus-kxqfp " Nov 5 23:31:43.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:43.962: INFO: stderr: "" Nov 5 23:31:43.962: INFO: stdout: "true" Nov 5 23:31:43.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:44.144: INFO: stderr: "" Nov 5 23:31:44.144: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:44.144: INFO: validating pod update-demo-nautilus-kxqfp Nov 5 23:31:44.148: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:44.148: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:44.148: INFO: update-demo-nautilus-kxqfp is verified up and running STEP: scaling up the replication controller Nov 5 23:31:44.157: INFO: scanned /root for discovery docs: Nov 5 23:31:44.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Nov 5 23:31:44.372: INFO: stderr: "" Nov 5 23:31:44.372: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 5 23:31:44.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:44.545: INFO: stderr: "" Nov 5 23:31:44.545: INFO: stdout: "update-demo-nautilus-kxqfp update-demo-nautilus-zd4pj " Nov 5 23:31:44.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:44.721: INFO: stderr: "" Nov 5 23:31:44.721: INFO: stdout: "true" Nov 5 23:31:44.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:44.883: INFO: stderr: "" Nov 5 23:31:44.883: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:44.883: INFO: validating pod update-demo-nautilus-kxqfp Nov 5 23:31:44.886: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:44.886: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:44.886: INFO: update-demo-nautilus-kxqfp is verified up and running Nov 5 23:31:44.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-zd4pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:45.067: INFO: stderr: "" Nov 5 23:31:45.067: INFO: stdout: "" Nov 5 23:31:45.067: INFO: update-demo-nautilus-zd4pj is created but not running Nov 5 23:31:50.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:50.228: INFO: stderr: "" Nov 5 23:31:50.228: INFO: stdout: "update-demo-nautilus-kxqfp update-demo-nautilus-zd4pj " Nov 5 23:31:50.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:50.398: INFO: stderr: "" Nov 5 23:31:50.398: INFO: stdout: "true" Nov 5 23:31:50.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:50.547: INFO: stderr: "" Nov 5 23:31:50.547: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:50.547: INFO: validating pod update-demo-nautilus-kxqfp Nov 5 23:31:50.550: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:50.550: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:50.550: INFO: update-demo-nautilus-kxqfp is verified up and running Nov 5 23:31:50.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-zd4pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:50.732: INFO: stderr: "" Nov 5 23:31:50.732: INFO: stdout: "" Nov 5 23:31:50.732: INFO: update-demo-nautilus-zd4pj is created but not running Nov 5 23:31:55.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 5 23:31:55.917: INFO: stderr: "" Nov 5 23:31:55.917: INFO: stdout: "update-demo-nautilus-kxqfp update-demo-nautilus-zd4pj " Nov 5 23:31:55.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:56.087: INFO: stderr: "" Nov 5 23:31:56.087: INFO: stdout: "true" Nov 5 23:31:56.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-kxqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:56.262: INFO: stderr: "" Nov 5 23:31:56.262: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:56.262: INFO: validating pod update-demo-nautilus-kxqfp Nov 5 23:31:56.265: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:56.265: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:56.265: INFO: update-demo-nautilus-kxqfp is verified up and running Nov 5 23:31:56.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-zd4pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 5 23:31:56.427: INFO: stderr: "" Nov 5 23:31:56.427: INFO: stdout: "true" Nov 5 23:31:56.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods update-demo-nautilus-zd4pj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 5 23:31:56.591: INFO: stderr: "" Nov 5 23:31:56.591: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 5 23:31:56.591: INFO: validating pod update-demo-nautilus-zd4pj Nov 5 23:31:56.594: INFO: got data: { "image": "nautilus.jpg" } Nov 5 23:31:56.594: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 5 23:31:56.595: INFO: update-demo-nautilus-zd4pj is verified up and running STEP: using delete to clean up resources Nov 5 23:31:56.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 delete --grace-period=0 --force -f -' Nov 5 23:31:56.742: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 5 23:31:56.742: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 5 23:31:56.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get rc,svc -l name=update-demo --no-headers' Nov 5 23:31:56.946: INFO: stderr: "No resources found in kubectl-6740 namespace.\n" Nov 5 23:31:56.946: INFO: stdout: "" Nov 5 23:31:56.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6740 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 5 23:31:57.118: INFO: stderr: "" Nov 5 23:31:57.118: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:31:57.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6740" for this suite. • [SLOW TEST:35.922 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":30,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:45.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 5 23:31:45.487: INFO: The status of Pod labelsupdatec2ed4472-2011-4198-9bdf-5b5916731129 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:47.491: INFO: The status of Pod labelsupdatec2ed4472-2011-4198-9bdf-5b5916731129 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:49.492: INFO: The status of Pod labelsupdatec2ed4472-2011-4198-9bdf-5b5916731129 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:51.492: INFO: The status of Pod labelsupdatec2ed4472-2011-4198-9bdf-5b5916731129 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:53.491: INFO: The status of Pod labelsupdatec2ed4472-2011-4198-9bdf-5b5916731129 is Running (Ready = true) Nov 5 23:31:54.008: INFO: Successfully updated pod "labelsupdatec2ed4472-2011-4198-9bdf-5b5916731129" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 
23:31:58.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4916" for this suite. • [SLOW TEST:12.592 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:56.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-2363/configmap-test-5168e979-62d2-4689-869e-0eaf68089a74 STEP: Creating a pod to test consume configMaps Nov 5 23:31:56.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7" in namespace "configmap-2363" to be "Succeeded or Failed" Nov 5 23:31:56.773: INFO: Pod "pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259617ms Nov 5 23:31:58.777: INFO: Pod "pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007029121s Nov 5 23:32:00.783: INFO: Pod "pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012950069s STEP: Saw pod success Nov 5 23:32:00.783: INFO: Pod "pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7" satisfied condition "Succeeded or Failed" Nov 5 23:32:00.785: INFO: Trying to get logs from node node1 pod pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7 container env-test: STEP: delete the pod Nov 5 23:32:00.798: INFO: Waiting for pod pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7 to disappear Nov 5 23:32:00.800: INFO: Pod pod-configmaps-781704fa-086b-4cb5-90b0-378f4b26efd7 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:00.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2363" for this suite. 
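The Update Demo run above drives its checks entirely through kubectl go-templates: list the pods behind the name=update-demo label, scale the replication controller, then re-poll each pod until its update-demo container reports a running state. A minimal standalone sketch of that pattern, reusing the namespace, label, and templates from the log (the kubectl wait shortcut at the end is an editorial addition, not something the test runs):

# List the pod names behind the label, as the test does between retries.
kubectl --namespace=kubectl-6740 get pods -l name=update-demo \
  -o go-template='{{range .items}}{{.metadata.name}} {{end}}'

# Scale the replication controller; the test then re-polls the list above.
kubectl --namespace=kubectl-6740 scale rc update-demo-nautilus --replicas=2 --timeout=5m

# Per-pod check: prints "true" only once the update-demo container is running.
kubectl --namespace=kubectl-6740 get pod update-demo-nautilus-kxqfp \
  -o go-template='{{range .status.containerStatuses}}{{if and (eq .name "update-demo") .state.running}}true{{end}}{{end}}'

# Equivalent shortcut without hand-rolled polling:
kubectl --namespace=kubectl-6740 wait pod -l name=update-demo --for=condition=Ready --timeout=5m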
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":732,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:27.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-0ae9367a-4208-4ca5-9bf0-0550d4d46b29 in namespace container-probe-4901 Nov 5 23:31:37.810: INFO: Started pod liveness-0ae9367a-4208-4ca5-9bf0-0550d4d46b29 in namespace container-probe-4901 STEP: checking the pod's current state and verifying that restartCount is present Nov 5 23:31:37.813: INFO: Initial restart count of pod liveness-0ae9367a-4208-4ca5-9bf0-0550d4d46b29 is 0 Nov 5 23:32:01.862: INFO: Restart count of pod container-probe-4901/liveness-0ae9367a-4208-4ca5-9bf0-0550d4d46b29 is now 1 (24.049221112s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:01.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4901" for this suite. 
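The probe test above starts a pod whose /healthz endpoint goes unhealthy after a short grace period, then watches restartCount move from 0 to 1 (about 24s elapsed in this run). A minimal sketch of an equivalent pod, assuming the agnhost liveness behavior (healthy at first, failing afterwards) and its port 8080; the pod name, args, and probe thresholds are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo              # hypothetical; the test generates a unique name
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["liveness"]             # assumed subcommand: /healthz OK briefly, then failing
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF

# The kubelet restarts the container once the probe fails:
kubectl get pod liveness-demo -o go-template='{{(index .status.containerStatuses 0).restartCount}}'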
• [SLOW TEST:34.116 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":652,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:58.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 5 23:31:58.124: INFO: Waiting up to 5m0s for pod "security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39" in namespace "security-context-4284" to be "Succeeded or Failed" Nov 5 23:31:58.129: INFO: Pod "security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.614205ms Nov 5 23:32:00.132: INFO: Pod "security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0076565s Nov 5 23:32:02.135: INFO: Pod "security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011192272s STEP: Saw pod success Nov 5 23:32:02.135: INFO: Pod "security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39" satisfied condition "Succeeded or Failed" Nov 5 23:32:02.139: INFO: Trying to get logs from node node2 pod security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39 container test-container: STEP: delete the pod Nov 5 23:32:02.164: INFO: Waiting for pod security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39 to disappear Nov 5 23:32:02.166: INFO: Pod security-context-b5d485e4-7a2c-4d80-86fe-782ed9d15f39 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:02.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4284" for this suite. 
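The Security Context test above asserts only that a pod-level runAsUser/runAsGroup pair is honored: the pod must reach Succeeded, and its log is fetched for verification. A minimal sketch of such a pod, with illustrative IDs, name, and image tag:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:                 # pod-level: applies to every container
    runAsUser: 1001
    runAsGroup: 2002
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "id -u && id -g"]
EOF

kubectl logs security-context-demo   # expect 1001 then 2002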
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":201,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:02.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Nov 5 23:32:02.216: INFO: Major version: 1 STEP: Confirm minor version Nov 5 23:32:02.216: INFO: cleanMinorVersion: 21 Nov 5 23:32:02.216: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:02.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-2655" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":18,"skipped":207,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:38.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-dsds STEP: Creating a pod to test atomic-volume-subpath Nov 5 23:31:38.491: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dsds" in namespace "subpath-9284" to be "Succeeded or Failed" Nov 5 23:31:38.499: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Pending", Reason="", readiness=false. Elapsed: 7.825205ms Nov 5 23:31:40.505: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01450088s Nov 5 23:31:42.512: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 4.021312764s Nov 5 23:31:44.516: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 6.025207447s Nov 5 23:31:46.523: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 8.032351626s Nov 5 23:31:48.528: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 10.036877586s Nov 5 23:31:50.533: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.042615255s Nov 5 23:31:52.542: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 14.051197679s Nov 5 23:31:54.550: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 16.059521177s Nov 5 23:31:56.554: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 18.062961824s Nov 5 23:31:58.558: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 20.06676297s Nov 5 23:32:00.562: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 22.071379465s Nov 5 23:32:02.569: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Running", Reason="", readiness=true. Elapsed: 24.077725105s Nov 5 23:32:04.577: INFO: Pod "pod-subpath-test-projected-dsds": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.085871577s STEP: Saw pod success Nov 5 23:32:04.577: INFO: Pod "pod-subpath-test-projected-dsds" satisfied condition "Succeeded or Failed" Nov 5 23:32:04.579: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-dsds container test-container-subpath-projected-dsds: STEP: delete the pod Nov 5 23:32:04.603: INFO: Waiting for pod pod-subpath-test-projected-dsds to disappear Nov 5 23:32:04.605: INFO: Pod pod-subpath-test-projected-dsds no longer exists STEP: Deleting pod pod-subpath-test-projected-dsds Nov 5 23:32:04.605: INFO: Deleting pod "pod-subpath-test-projected-dsds" in namespace "subpath-9284" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:04.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9284" for this suite. 
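The Subpath test above keeps a container reading through a subPath mount for roughly 26 seconds while the projected volume underneath is atomically rewritten. A sketch of the shape of such a pod, assuming a projected configMap source; all names and the image tag are illustrative:

kubectl create configmap subpath-demo --from-literal=hello=world

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "cat /etc/demo/hello"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/hello
      subPath: hello               # mount a single key, not the whole volume
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: subpath-demo
EOF

Note that a subPath mount does not see later updates to the ConfigMap, which is part of what the atomic-writer suite pins down.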
• [SLOW TEST:26.165 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":442,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":40,"skipped":735,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:53.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:05.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8481" for this suite. 
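The Job above ("tasks sometimes fail and are locally restarted") hinges on restartPolicy: OnFailure, so a failing container is restarted in place by the kubelet rather than replaced by a new pod. A sketch in the spirit of the upstream fail-once job: the marker file survives in the emptyDir across in-place restarts, so the second attempt succeeds. Names, counts, and the image tag are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local            # hypothetical name
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure     # restart the container inside the same pod
      containers:
      - name: c
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        # Fail on the first run, succeed after the local restart.
        command: ["sh", "-c", "if [ -r /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF

kubectl wait --for=condition=complete job/fail-once-local --timeout=5m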
• [SLOW TEST:12.039 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":41,"skipped":735,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:06.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Nov 5 23:32:06.034: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-536 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:06.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-536" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":42,"skipped":746,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:01.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2381.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2381.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2381.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2381.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2381.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2381.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 5 23:32:09.968: INFO: DNS probes using dns-2381/dns-test-f037e658-0472-461c-95a9-39a0ee623f47 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:09.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2381" for this suite. • [SLOW TEST:8.085 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":657,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:02.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 5 23:32:02.301: INFO: Waiting up to 5m0s for pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870" in namespace "emptydir-7151" to be "Succeeded or Failed" Nov 5 23:32:02.304: INFO: Pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.686474ms Nov 5 23:32:04.309: INFO: Pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007344057s Nov 5 23:32:06.313: INFO: Pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011151154s Nov 5 23:32:08.315: INFO: Pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013589113s Nov 5 23:32:10.318: INFO: Pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016145956s STEP: Saw pod success Nov 5 23:32:10.318: INFO: Pod "pod-91b47220-a67c-4803-8370-ea9cbe7a5870" satisfied condition "Succeeded or Failed" Nov 5 23:32:10.320: INFO: Trying to get logs from node node2 pod pod-91b47220-a67c-4803-8370-ea9cbe7a5870 container test-container: STEP: delete the pod Nov 5 23:32:10.392: INFO: Waiting for pod pod-91b47220-a67c-4803-8370-ea9cbe7a5870 to disappear Nov 5 23:32:10.395: INFO: Pod pod-91b47220-a67c-4803-8370-ea9cbe7a5870 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:10.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7151" for this suite. • [SLOW TEST:8.135 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":227,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:00.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-c60e8155-8168-4186-b5d7-5e4d2ff9b3e5 STEP: Creating secret with name s-test-opt-upd-807c77b4-2e60-4383-8cc2-03ba7dce00b6 STEP: Creating the pod Nov 5 23:32:00.905: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:02.908: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:04.908: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:06.908: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:08.908: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:10.910: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:12.909: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is 
Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:14.909: INFO: The status of Pod pod-projected-secrets-9891120e-2afd-4e7e-a747-5dd67ce7a8c1 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-c60e8155-8168-4186-b5d7-5e4d2ff9b3e5 STEP: Updating secret s-test-opt-upd-807c77b4-2e60-4383-8cc2-03ba7dce00b6 STEP: Creating secret with name s-test-opt-create-49d5dd57-dca2-46d5-98d1-766e6765f4c8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:18.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1163" for this suite. • [SLOW TEST:18.141 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":753,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:04.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:32:05.048: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:32:07.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:32:09.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:32:11.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:32:13.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751925, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:32:16.076: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Nov 5 23:32:20.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-7000 attach --namespace=webhook-7000 to-be-attached-pod -i -c=container1' Nov 5 23:32:20.272: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:20.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7000" for this suite. STEP: Destroying namespace "webhook-7000-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.631 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":29,"skipped":476,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:10.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:32:10.049: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Nov 5 23:32:10.054: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 5 23:32:15.059: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 5 23:32:17.065: INFO: Creating deployment "test-rolling-update-deployment" Nov 5 23:32:17.071: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Nov 5 23:32:17.075: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Nov 5 23:32:19.081: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Nov 5 23:32:19.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751937, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751937, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751937, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751937, 
loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 5 23:32:21.090: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 5 23:32:21.098: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1688 937aee1b-fb7b-4b1f-b55b-541888941ff8 53451 1 2021-11-05 23:32:17 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-11-05 23:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:32:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004c69738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-11-05 23:32:17 +0000 
UTC,LastTransitionTime:2021-11-05 23:32:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-11-05 23:32:20 +0000 UTC,LastTransitionTime:2021-11-05 23:32:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 5 23:32:21.101: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-1688 9ad3a358-0e55-4274-b58b-90d95e1b84f1 53432 1 2021-11-05 23:32:17 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 937aee1b-fb7b-4b1f-b55b-541888941ff8 0xc004af60c7 0xc004af60c8}] [] [{kube-controller-manager Update apps/v1 2021-11-05 23:32:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"937aee1b-fb7b-4b1f-b55b-541888941ff8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004af6158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:32:21.101: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Nov 5 23:32:21.102: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1688 160e3cf7-9345-415c-9418-2ce1f77db25b 53448 2 2021-11-05 23:32:10 +0000 UTC map[name:sample-pod pod:httpd] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 937aee1b-fb7b-4b1f-b55b-541888941ff8 0xc004c69f57 0xc004c69f58}] [] [{e2e.test Update apps/v1 2021-11-05 23:32:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-05 23:32:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"937aee1b-fb7b-4b1f-b55b-541888941ff8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004af6058 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 5 23:32:21.105: INFO: Pod "test-rolling-update-deployment-585b757574-rhwrq" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-rhwrq test-rolling-update-deployment-585b757574- deployment-1688 a95107a1-45e2-4051-a996-83d939cff8cd 53429 0 2021-11-05 23:32:17 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.161" ], "mac": "46:2a:d8:d5:9a:a7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.161" ], "mac": "46:2a:d8:d5:9a:a7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 9ad3a358-0e55-4274-b58b-90d95e1b84f1 0xc004af657f 0xc004af6590}] [] [{kube-controller-manager Update v1 2021-11-05 23:32:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9ad3a358-0e55-4274-b58b-90d95e1b84f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-05 23:32:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-05 23:32:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.161\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fjd6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjd6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:32:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:32:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:32:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-05 23:32:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.161,StartTime:2021-11-05 23:32:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-05 23:32:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://1b0b3fb4f9ca399d31c0063d1d8509b56475b9dcf1e84e17c108a34b4344a194,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:21.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1688" for this suite. 
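The RollingUpdateDeployment test above adopts an existing replica set, rolls to a new template, and verifies that exactly one old ReplicaSet is retained at zero replicas; the object dump echoes the strategy defaults (maxSurge and maxUnavailable of 25%, revisionHistoryLimit of 10). A sketch of the same mechanics, using the two images from the log; the deployment name is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo               # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:                 # the defaults echoed in the dump above
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF

# Rolling to a new image creates a new ReplicaSet and scales the old one to 0,
# but the old one is kept for rollbacks (revisionHistoryLimit defaults to 10).
kubectl set image deployment/rolling-demo app=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl rollout status deployment/rolling-demo
kubectl get rs -l name=sample-pod    # one new ReplicaSet, one old one at 0/0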
• [SLOW TEST:11.087 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":31,"skipped":676,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:21.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:21.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3076" for this suite. 
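
The sysctl test above builds a pod whose securityContext mixes one well-formed sysctl with two malformed names, and expects the API server to reject the pod at validation time, which is why it completes in under a second with no pod ever scheduled. A minimal sketch (the specific names and image here are illustrative, not necessarily the suite's exact values):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-reject-demo         # illustrative
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced # well-formed, namespaced sysctl
      value: "0"
    - name: foo-                   # malformed name: fails API validation
      value: bar
    - name: "..."                  # malformed name: fails API validation
      value: "1"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1  # illustrative image
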
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":32,"skipped":717,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:10.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:32:10.442: INFO: Creating ReplicaSet my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd Nov 5 23:32:10.449: INFO: Pod name my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd: Found 0 pods out of 1 Nov 5 23:32:15.452: INFO: Pod name my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd: Found 1 pods out of 1 Nov 5 23:32:15.452: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd" is running Nov 5 23:32:17.460: INFO: Pod "my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd-dxdtd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:32:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:32:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:32:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-05 23:32:10 +0000 UTC Reason: Message:}]) Nov 5 23:32:17.461: INFO: Trying to dial the pod Nov 5 23:32:22.471: INFO: Controller my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd: Got expected result from replica 1 [my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd-dxdtd]: "my-hostname-basic-e8fea30c-1055-4e08-a138-5b17bec360cd-dxdtd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:22.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9756" for this suite. 
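
The ReplicaSet test above waits for its single replica to run and then dials the pod, expecting it to answer with its own hostname; the agnhost image's serve-hostname mode provides exactly that. An equivalent manifest might look like this (name illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-basic-demo        # illustrative
spec:
  replicas: 1
  selector:
    matchLabels:
      name: hostname-basic-demo
  template:
    metadata:
      labels:
        name: hostname-basic-demo
    spec:
      containers:
      - name: hostname-basic-demo
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["serve-hostname"]   # answers HTTP on :9376 with the pod's hostname
        ports:
        - containerPort: 9376
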
• [SLOW TEST:12.059 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":20,"skipped":235,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:19.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 5 23:32:19.119: INFO: Waiting up to 5m0s for pod "downward-api-45495042-b865-4bd0-a17a-4bd2e091c456" in namespace "downward-api-8689" to be "Succeeded or Failed" Nov 5 23:32:19.126: INFO: Pod "downward-api-45495042-b865-4bd0-a17a-4bd2e091c456": Phase="Pending", Reason="", readiness=false. Elapsed: 7.070771ms Nov 5 23:32:21.129: INFO: Pod "downward-api-45495042-b865-4bd0-a17a-4bd2e091c456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010036001s Nov 5 23:32:23.133: INFO: Pod "downward-api-45495042-b865-4bd0-a17a-4bd2e091c456": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013715163s STEP: Saw pod success Nov 5 23:32:23.133: INFO: Pod "downward-api-45495042-b865-4bd0-a17a-4bd2e091c456" satisfied condition "Succeeded or Failed" Nov 5 23:32:23.137: INFO: Trying to get logs from node node1 pod downward-api-45495042-b865-4bd0-a17a-4bd2e091c456 container dapi-container: STEP: delete the pod Nov 5 23:32:23.149: INFO: Waiting for pod downward-api-45495042-b865-4bd0-a17a-4bd2e091c456 to disappear Nov 5 23:32:23.150: INFO: Pod downward-api-45495042-b865-4bd0-a17a-4bd2e091c456 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:23.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8689" for this suite. 
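
The Downward API test above injects the container's own resource requests and limits into its environment via resourceFieldRef, runs the pod to completion, and checks the log output. A minimal pod demonstrating the same fields (names, image tag, and values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-resources-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.34            # illustrative tag
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
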
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":803,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:20.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-c431f0ac-9442-4bfd-a440-a90f922499c2 STEP: Creating a pod to test consume configMaps Nov 5 23:32:20.386: INFO: Waiting up to 5m0s for pod "pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6" in namespace "configmap-4644" to be "Succeeded or Failed" Nov 5 23:32:20.389: INFO: Pod "pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070099ms Nov 5 23:32:22.392: INFO: Pod "pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006160041s Nov 5 23:32:24.397: INFO: Pod "pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010716026s STEP: Saw pod success Nov 5 23:32:24.397: INFO: Pod "pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6" satisfied condition "Succeeded or Failed" Nov 5 23:32:24.399: INFO: Trying to get logs from node node2 pod pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6 container agnhost-container: STEP: delete the pod Nov 5 23:32:24.412: INFO: Waiting for pod pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6 to disappear Nov 5 23:32:24.414: INFO: Pod pod-configmaps-15b7d786-2336-4a90-b391-075a29c880d6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:24.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4644" for this suite. 
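
The ConfigMap volume test above mounts a single key under a remapped path with an explicit per-item file mode; the "mappings and Item mode set" in the test name refers to the items list and its mode field. Equivalent objects, sketched (all names illustrative; the suite uses agnhost's mounttest where busybox is shown here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-demo      # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.34            # illustrative tag
    command: ["sh", "-c", "ls -l /etc/cm/path/to && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-volume-demo
      items:
      - key: data-1
        path: path/to/data-1       # key remapped to a custom path
        mode: 0400                 # per-item file mode, octal
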
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":484,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:21.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 5 23:32:22.243: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 5 23:32:24.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751942, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751942, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751942, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771751942, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 5 23:32:27.263: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:27.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-369" for this suite. STEP: Destroying namespace "webhook-369-markers" for this suite. 
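
The admission-webhook test registers a mutating webhook for pod CREATE operations and then verifies that a newly created pod comes back mutated with defaults applied. In outline, the registration object looks like the following (every name, path, and the CA bundle below are placeholders; the suite generates its own certs, deployment, and the e2e-test-webhook service mentioned in the log):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-pod-demo          # placeholder
webhooks:
- name: pod-defaulter.example.com  # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-demo      # placeholder; the suite uses its own namespace
      name: e2e-test-webhook
      path: /mutating-pods         # placeholder path
    caBundle: Cg==                 # placeholder; base64 CA that signed the serving cert
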
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.059 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":33,"skipped":741,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:22.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:22.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-721 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:28.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-1862" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:28.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-721" for this suite. 
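
The DisruptionController test above creates PodDisruptionBudgets in two namespaces, lists them cluster-wide, and deletes them as a collection. A minimal PDB (policy/v1 is GA on the v1.21 server used here; names illustrative):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo                   # illustrative
spec:
  minAvailable: 1                  # keep at least one matching pod during voluntary disruptions
  selector:
    matchLabels:
      app: demo

The cluster-wide listing step corresponds to kubectl get pdb --all-namespaces, and the collection delete to kubectl delete pdb --all within a namespace.
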
• [SLOW TEST:6.100 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":21,"skipped":243,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:28.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 5 23:32:28.642: INFO: Waiting up to 5m0s for pod "downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a" in namespace "downward-api-1004" to be "Succeeded or Failed" Nov 5 23:32:28.648: INFO: Pod "downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451468ms Nov 5 23:32:30.652: INFO: Pod "downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010621655s Nov 5 23:32:32.661: INFO: Pod "downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019007004s STEP: Saw pod success Nov 5 23:32:32.661: INFO: Pod "downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a" satisfied condition "Succeeded or Failed" Nov 5 23:32:32.663: INFO: Trying to get logs from node node2 pod downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a container dapi-container: STEP: delete the pod Nov 5 23:32:32.674: INFO: Waiting for pod downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a to disappear Nov 5 23:32:32.676: INFO: Pod downward-api-d27dbc2f-b2ac-42c6-9af2-aa675a77128a no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:32.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1004" for this suite. 
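
This Downward API variant exposes pod metadata rather than resources: name, namespace, and IP arrive as environment variables through fieldRef. Sketch (names and image tag illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-metadata-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.34            # illustrative tag
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
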
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":249,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:24.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Nov 5 23:32:24.486: INFO: The status of Pod pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:26.490: INFO: The status of Pod pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:28.489: INFO: The status of Pod pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 5 23:32:29.002: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025" Nov 5 23:32:29.002: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025" in namespace "pods-8453" to be "terminated due to deadline exceeded" Nov 5 23:32:29.004: INFO: Pod "pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025": Phase="Running", Reason="", readiness=true. Elapsed: 2.096422ms Nov 5 23:32:31.011: INFO: Pod "pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025": Phase="Running", Reason="", readiness=true. Elapsed: 2.008892814s Nov 5 23:32:33.014: INFO: Pod "pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.01222697s Nov 5 23:32:33.014: INFO: Pod "pod-update-activedeadlineseconds-bd9dd278-39b0-4b82-a5ff-ed2b8cc06025" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:33.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8453" for this suite. 
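
The pod above was updated in place: activeDeadlineSeconds is one of the few mutable pod-spec fields, and it may only be shortened. Once the deadline passes, the kubelet fails the pod with reason DeadlineExceeded, which is exactly the Phase="Failed" transition logged above. Sketch (name, image tag, and values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: active-deadline-demo       # illustrative
spec:
  activeDeadlineSeconds: 30        # may later be lowered, never raised or removed
  containers:
  - name: main
    image: busybox:1.34            # illustrative tag
    command: ["sh", "-c", "sleep 3600"]

Lowering it afterwards, e.g. kubectl patch pod active-deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}', reproduces the DeadlineExceeded failure seen in the test.
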
• [SLOW TEST:8.573 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":497,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:32.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Nov 5 23:32:32.744: INFO: Waiting up to 5m0s for pod "var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec" in namespace "var-expansion-7901" to be "Succeeded or Failed" Nov 5 23:32:32.746: INFO: Pod "var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456628ms Nov 5 23:32:34.749: INFO: Pod "var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005831932s Nov 5 23:32:36.756: INFO: Pod "var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012566811s STEP: Saw pod success Nov 5 23:32:36.756: INFO: Pod "var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec" satisfied condition "Succeeded or Failed" Nov 5 23:32:36.758: INFO: Trying to get logs from node node1 pod var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec container dapi-container: STEP: delete the pod Nov 5 23:32:36.769: INFO: Waiting for pod var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec to disappear Nov 5 23:32:36.771: INFO: Pod var-expansion-af173df4-0597-4d7c-83cc-bb58c63faaec no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:36.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7901" for this suite. 
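
Variable expansion here means one env var is composed from earlier ones using $(VAR) syntax, expanded by the kubelet rather than by a shell. Sketch (names, image tag, and values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.34            # illustrative tag
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"      # expanded from the entries defined above it
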
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:23.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:32:27.207: INFO: Deleting pod "var-expansion-a9d235df-5589-4a90-8463-b996072335bb" in namespace "var-expansion-1954" Nov 5 23:32:27.213: INFO: Wait up to 5m0s for pod "var-expansion-a9d235df-5589-4a90-8463-b996072335bb" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:39.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1954" for this suite. • [SLOW TEST:16.063 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":41,"skipped":804,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ Nov 5 23:32:39.237: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:33.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating 
ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:40.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2649" for this suite. • [SLOW TEST:7.635 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":32,"skipped":508,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Nov 5 23:32:40.681: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:28:35.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-58ae240d-5759-470f-be6e-c54592abc01a in namespace container-probe-8285 Nov 5 23:28:41.186: INFO: Started pod liveness-58ae240d-5759-470f-be6e-c54592abc01a in namespace container-probe-8285 STEP: checking the pod's current state and verifying that restartCount is present Nov 5 23:28:41.189: INFO: Initial restart count of pod liveness-58ae240d-5759-470f-be6e-c54592abc01a is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:41.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8285" for this suite. 
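
The four-minute runtime of this probe test is the point: the pod keeps port 8080 open, so a tcpSocket liveness probe keeps succeeding and restartCount must stay at 0 for the whole observation window. Sketch (name illustrative; agnhost's netexec mode serves HTTP on the probed port):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo          # illustrative
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["netexec", "--http-port=8080"]  # keeps :8080 listening, so the probe passes
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
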
• [SLOW TEST:246.618 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:33.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1105 23:31:43.627312 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:32:45.643: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:45.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3352" for this suite. • [SLOW TEST:72.075 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":21,"skipped":406,"failed":0} Nov 5 23:32:45.653: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:27.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Nov 5 23:32:27.371: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:49.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-197" for this suite. 
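
"Mark a version not serverd" in the step text is a typo in the test source for "not served": the test flips served: false on one version of a multi-version CRD, and the apiserver must then drop that version's definitions from the published OpenAPI spec while leaving the other version untouched. Sketch (group and kind illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com           # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                  # flipping this removes v2 from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
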
• [SLOW TEST:22.230 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":34,"skipped":746,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} Nov 5 23:32:49.579: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:36.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:32:49.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-47" for this suite. • [SLOW TEST:13.097 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":24,"skipped":283,"failed":0} Nov 5 23:32:49.911: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:57.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-eb64a5d6-97a7-4f12-81b7-c44f45d92d58 STEP: Creating configMap with name cm-test-opt-upd-4821d8de-a374-46cc-ad69-18341e7d15d5 STEP: Creating the pod Nov 5 23:31:57.219: INFO: The status of Pod pod-projected-configmaps-e1e62ae7-774e-43dd-ae3e-9e3f89d5df70 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:31:59.222: INFO: The status of Pod pod-projected-configmaps-e1e62ae7-774e-43dd-ae3e-9e3f89d5df70 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:01.225: INFO: The status of Pod pod-projected-configmaps-e1e62ae7-774e-43dd-ae3e-9e3f89d5df70 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:03.222: INFO: The status of Pod pod-projected-configmaps-e1e62ae7-774e-43dd-ae3e-9e3f89d5df70 is Pending, waiting for it to be Running (with Ready = true) Nov 5 23:32:05.222: INFO: The status of Pod pod-projected-configmaps-e1e62ae7-774e-43dd-ae3e-9e3f89d5df70 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-eb64a5d6-97a7-4f12-81b7-c44f45d92d58 STEP: Updating configmap cm-test-opt-upd-4821d8de-a374-46cc-ad69-18341e7d15d5 STEP: Creating configMap with name cm-test-opt-create-6861a242-806e-426d-ae2d-3c900a463a61 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:33:11.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3697" for this suite. 
• [SLOW TEST:74.774 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":479,"failed":0} Nov 5 23:33:11.945: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:32:06.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W1105 23:32:12.206003 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:33:14.223: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:33:14.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8407" for this suite. • [SLOW TEST:68.084 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":43,"skipped":748,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} Nov 5 23:33:14.235: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:43.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1105 23:32:23.567748 26 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Nov 5 23:33:25.586: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 5 23:33:25.586: INFO: Deleting pod "simpletest.rc-7sgpr" in namespace "gc-1600" Nov 5 23:33:25.594: INFO: Deleting pod "simpletest.rc-8dwmd" in namespace "gc-1600" Nov 5 23:33:25.600: INFO: Deleting pod "simpletest.rc-9g4h6" in namespace "gc-1600" Nov 5 23:33:25.606: INFO: Deleting pod "simpletest.rc-dzdzs" in namespace "gc-1600" Nov 5 23:33:25.611: INFO: Deleting pod "simpletest.rc-ghpmg" in namespace "gc-1600" Nov 5 23:33:25.617: INFO: Deleting pod "simpletest.rc-h8j67" in namespace "gc-1600" Nov 5 23:33:25.622: INFO: Deleting pod "simpletest.rc-lrlx6" in namespace "gc-1600" Nov 5 23:33:25.628: INFO: Deleting pod "simpletest.rc-wwphf" in namespace "gc-1600" Nov 5 23:33:25.633: INFO: Deleting pod "simpletest.rc-z2nkn" in namespace "gc-1600" Nov 5 23:33:25.640: INFO: Deleting pod "simpletest.rc-zcq5n" in namespace "gc-1600" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:33:25.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1600" for this suite. • [SLOW TEST:102.141 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":42,"skipped":648,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} Nov 5 23:33:25.655: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:31:12.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-140a1dfa-b0f9-4c7b-afa3-c973de449f6a in namespace container-probe-2973 Nov 5 23:31:18.902: INFO: Started pod busybox-140a1dfa-b0f9-4c7b-afa3-c973de449f6a in namespace container-probe-2973 STEP: checking the pod's current state and verifying that restartCount is present Nov 5 23:31:18.905: INFO: Initial restart count of pod busybox-140a1dfa-b0f9-4c7b-afa3-c973de449f6a is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:35:19.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2973" for this suite. 
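
This is the exec-probe twin of the tcp:8080 case above: cat /tmp/health keeps exiting 0 because the file is created once and never removed, so the container must never restart during the roughly four-minute watch. Sketch (name and image tag illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo         # illustrative
spec:
  containers:
  - name: busybox
    image: busybox:1.34            # illustrative tag
    command: ["sh", "-c", "touch /tmp/health; sleep 36000"]  # file exists for the pod's lifetime
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 10
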
• [SLOW TEST:246.613 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":406,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
Nov 5 23:35:19.474: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":416,"failed":0}
Nov 5 23:32:41.764: INFO: Running AfterSuite actions on all nodes
Nov 5 23:35:19.507: INFO: Running AfterSuite actions on node 1
Nov 5 23:35:19.507: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

Ran 320 of 5770 Specs in 850.004 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5450 Skipped

Ginkgo ran 1 suite in 14m11.581123462s
Test Suite Failed