Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634952497 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 23 01:28:19.460: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.462: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 01:28:19.489: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 01:28:19.557: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 01:28:19.557: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 01:28:19.557: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 01:28:19.557: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 01:28:19.557: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 01:28:19.568: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 01:28:19.568: INFO: e2e test version: v1.21.5
Oct 23 01:28:19.570: INFO: kube-apiserver version: v1.21.1
Oct 23 01:28:19.570: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.575: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 23 01:28:19.594: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.612: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 01:28:19.593: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.615: INFO: Cluster IP family: ipv4
Oct 23 01:28:19.596: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.617: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 01:28:19.594: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.617: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 01:28:19.597: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.619: INFO: Cluster IP family: ipv4
SSS
------------------------------
Oct 23 01:28:19.602: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.622: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Oct 23 01:28:19.605: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.625: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
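For reference, the schedulability gate logged above ("Waiting up to 30m0s for all (but 0) nodes to be schedulable") can be approximated with client-go. A minimal sketch, assuming only the kubeconfig path shown in the log; the readiness check here is a simplification of what the framework actually evaluates:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			// A node counts as Ready when its NodeReady condition is True.
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// Cordoned nodes (spec.unschedulable) are excluded from "schedulable".
		fmt.Printf("%s ready=%v schedulable=%v\n", n.Name, ready, !n.Spec.Unschedulable)
	}
}
```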
Oct 23 01:28:19.609: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.629: INFO: Cluster IP family: ipv4
SSS
------------------------------
Oct 23 01:28:19.609: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:28:19.631: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W1023 01:28:19.640696 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.640: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.642: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if v1 is in available api versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: validating api versions
Oct 23 01:28:19.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9901 api-versions'
Oct 23 01:28:19.741: INFO: stderr: ""
Oct 23 01:28:19.741: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd-publish-openapi-test-common-group.example.com/v6\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:19.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9901" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
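The framework drives this check by shelling out to kubectl, as the "Running '/usr/local/bin/kubectl ...'" line shows. A minimal Go equivalent (binary path and kubeconfig taken from the log; the --namespace flag is harness boilerplate and not needed for api-versions):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the e2e framework logs above.
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "api-versions").Output()
	if err != nil {
		panic(err)
	}
	// The conformance assertion: the core group/version "v1" must be listed.
	found := false
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "v1" {
			found = true
		}
	}
	fmt.Println("core v1 present:", found)
}
```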
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Oct 23 01:28:19.918: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Oct 23 01:28:19.933: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:19.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-546" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
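The steps above are plain CRUD against /apis/node.k8s.io/v1. A condensed client-go sketch of the same operations (the object name and handler are illustrative, not the generated names the test uses):

```go
package main

import (
	"context"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	rcs := client.NodeV1().RuntimeClasses() // cluster-scoped resource
	ctx := context.TODO()

	// STEP: creating
	rc, err := rcs.Create(ctx, &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // handler names are cluster-specific
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// STEP: getting / listing
	_, _ = rcs.Get(ctx, rc.Name, metav1.GetOptions{})
	_, _ = rcs.List(ctx, metav1.ListOptions{})
	// STEP: patching (an annotation the watch step can then observe)
	_, _ = rcs.Patch(ctx, rc.Name, types.MergePatchType,
		[]byte(`{"metadata":{"annotations":{"patched":"true"}}}`), metav1.PatchOptions{})
	// STEP: deleting
	_ = rcs.Delete(ctx, rc.Name, metav1.DeleteOptions{})
}
```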
SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
W1023 01:28:19.657097 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.657: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.658: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:21.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4258" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
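The pdb steps map directly onto the policy/v1 client. A minimal sketch (namespace, name, selector, and minAvailable are illustrative):

```go
package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pdbs := client.PolicyV1().PodDisruptionBudgets("default")
	ctx := context.TODO()

	// STEP: creating the pdb
	minAvailable := intstr.FromInt(2)
	pdb, err := pdbs.Create(ctx, &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "example"}},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// STEP: patching the pdb, expressed as a merge patch
	_, _ = pdbs.Patch(ctx, pdb.Name, types.MergePatchType,
		[]byte(`{"metadata":{"labels":{"patched":"true"}}}`), metav1.PatchOptions{})
	_ = pdbs.Delete(ctx, pdb.Name, metav1.DeleteOptions{})
}
```

The repeated "Waiting for the pdb to be processed" steps correspond to polling the PDB's status until the disruption controller has observed the latest generation.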
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
W1023 01:28:19.672371 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.672: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.674: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct 23 01:28:19.688: INFO: Waiting up to 5m0s for pod "client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27" in namespace "containers-6129" to be "Succeeded or Failed"
Oct 23 01:28:19.691: INFO: Pod "client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604721ms
Oct 23 01:28:21.694: INFO: Pod "client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005623s
Oct 23 01:28:23.698: INFO: Pod "client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009831776s
STEP: Saw pod success
Oct 23 01:28:23.698: INFO: Pod "client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27" satisfied condition "Succeeded or Failed"
Oct 23 01:28:23.701: INFO: Trying to get logs from node node1 pod client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27 container agnhost-container:
STEP: delete the pod
Oct 23 01:28:23.721: INFO: Waiting for pod client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27 to disappear
Oct 23 01:28:23.723: INFO: Pod client-containers-a5e7fb5c-35c2-42fb-be25-df6802a7fa27 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:23.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6129" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
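In outline, the pod under test sets the container's Args field, which overrides the image's default CMD while leaving its ENTRYPOINT alone. A sketch of that shape (image tag and arguments are illustrative; the test generates its own names):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				// Args overrides the image's CMD (docker cmd); Command would
				// override the ENTRYPOINT instead.
				Args: []string{"entrypoint-tester", "override", "arguments"},
			}},
		},
	}
	// Print the resulting manifest rather than creating it.
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```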
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
W1023 01:28:19.674524 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.674: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.676: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Oct 23 01:28:19.692: INFO: Waiting up to 5m0s for pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f" in namespace "var-expansion-8483" to be "Succeeded or Failed"
Oct 23 01:28:19.694: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.911971ms
Oct 23 01:28:21.697: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004832995s
Oct 23 01:28:23.699: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007374797s
Oct 23 01:28:25.702: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010347928s
Oct 23 01:28:27.706: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014420059s
Oct 23 01:28:29.710: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.017840094s
STEP: Saw pod success
Oct 23 01:28:29.710: INFO: Pod "var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f" satisfied condition "Succeeded or Failed"
Oct 23 01:28:29.712: INFO: Trying to get logs from node node2 pod var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f container dapi-container:
STEP: delete the pod
Oct 23 01:28:29.789: INFO: Waiting for pod var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f to disappear
Oct 23 01:28:29.791: INFO: Pod var-expansion-972db0f4-e536-4e44-abed-de5ca506ca6f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:29.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8483" for this suite.

• [SLOW TEST:10.148 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
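The pod shape behind this test: an env var fed from the downward API is expanded inside the mount's subPathExpr, so the subdirectory name is derived per pod. A sketch with illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "ls /volume_mount"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// $(POD_NAME) is substituted from the env var above, which
					// is what "substituting values in a volume subpath" exercises.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```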
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1023 01:28:19.653014 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.653: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.655: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-5ca9a05d-8d31-42d2-bcdd-60ec1e176606
STEP: Creating a pod to test consume configMaps
Oct 23 01:28:19.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b" in namespace "projected-489" to be "Succeeded or Failed"
Oct 23 01:28:19.674: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.982545ms
Oct 23 01:28:21.678: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005649049s
Oct 23 01:28:23.683: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010588325s
Oct 23 01:28:25.687: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014781508s
Oct 23 01:28:27.691: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018668894s
Oct 23 01:28:29.694: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022185727s
STEP: Saw pod success
Oct 23 01:28:29.694: INFO: Pod "pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b" satisfied condition "Succeeded or Failed"
Oct 23 01:28:29.696: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b container projected-configmap-volume-test:
STEP: delete the pod
Oct 23 01:28:29.789: INFO: Waiting for pod pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b to disappear
Oct 23 01:28:29.791: INFO: Pod pod-projected-configmaps-a3dcffa4-5da7-4143-93e5-c36ce3c2ff5b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:29.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-489" for this suite.

• [SLOW TEST:10.173 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
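Here the same ConfigMap is projected into two separate volumes of one pod, which is the point of "consumable in multiple volumes in the same pod". A sketch of that shape (names illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMap returns a volume source projecting the named ConfigMap.
func projectedConfigMap(name string) corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			}},
		},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/cm-volume-1/* /etc/cm-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-volume-1", MountPath: "/etc/cm-volume-1"},
					{Name: "cm-volume-2", MountPath: "/etc/cm-volume-2"},
				},
			}},
			// The same ConfigMap backs both volumes.
			Volumes: []corev1.Volume{
				{Name: "cm-volume-1", VolumeSource: projectedConfigMap("example-cm")},
				{Name: "cm-volume-2", VolumeSource: projectedConfigMap("example-cm")},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```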
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:21.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 23 01:28:21.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61" in namespace "projected-4051" to be "Succeeded or Failed"
Oct 23 01:28:21.776: INFO: Pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057969ms
Oct 23 01:28:23.779: INFO: Pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005595553s
Oct 23 01:28:25.783: INFO: Pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009290067s
Oct 23 01:28:27.787: INFO: Pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013080267s
Oct 23 01:28:29.790: INFO: Pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016169501s
STEP: Saw pod success
Oct 23 01:28:29.790: INFO: Pod "downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61" satisfied condition "Succeeded or Failed"
Oct 23 01:28:29.792: INFO: Trying to get logs from node node2 pod downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61 container client-container:
STEP: delete the pod
Oct 23 01:28:29.825: INFO: Waiting for pod downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61 to disappear
Oct 23 01:28:29.827: INFO: Pod downwardapi-volume-ba064df8-4358-479c-87e2-860791b2cb61 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:29.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4051" for this suite.

• [SLOW TEST:8.100 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}
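The downward API projection here surfaces the container's own memory limit as a file in the volume. A sketch of that projection (paths and the limit value are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// resourceFieldRef resolves to the container's
									// limits.memory, printed by the command above.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```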
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:29.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:29.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3294" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":2,"skipped":60,"failed":0}
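The Endpoint lifecycle steps correspond to ordinary operations on the core/v1 Endpoints resource. A minimal sketch (namespace, name, and addresses illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	eps := client.CoreV1().Endpoints("default")
	ctx := context.TODO()

	// STEP: creating an Endpoint
	ep, err := eps.Create(ctx, &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "example-endpoint"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.1"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// STEP: patching the Endpoint
	_, _ = eps.Patch(ctx, ep.Name, types.MergePatchType,
		[]byte(`{"metadata":{"labels":{"test":"updated"}}}`), metav1.PatchOptions{})
	// STEP: deleting (the test itself deletes by collection)
	_ = eps.Delete(ctx, ep.Name, metav1.DeleteOptions{})
}
```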
SSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-f104bb68-a523-441b-a769-9137f94f6529
STEP: Creating a pod to test consume secrets
Oct 23 01:28:20.069: INFO: Waiting up to 5m0s for pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e" in namespace "secrets-6372" to be "Succeeded or Failed"
Oct 23 01:28:20.071: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432509ms
Oct 23 01:28:22.075: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006122869s
Oct 23 01:28:24.078: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009027137s
Oct 23 01:28:26.082: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013001425s
Oct 23 01:28:28.085: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016160926s
Oct 23 01:28:30.088: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01938165s
STEP: Saw pod success
Oct 23 01:28:30.088: INFO: Pod "pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e" satisfied condition "Succeeded or Failed"
Oct 23 01:28:30.091: INFO: Trying to get logs from node node2 pod pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e container secret-volume-test:
STEP: delete the pod
Oct 23 01:28:30.106: INFO: Waiting for pod pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e to disappear
Oct 23 01:28:30.108: INFO: Pod pod-secrets-da2e375b-55dd-4abe-8b35-d1037609f72e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:30.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6372" for this suite.

• [SLOW TEST:10.152 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
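Structurally this is the Secret twin of the projected-ConfigMap test above: one Secret mounted through two volumes of the same pod. An abbreviated sketch (names illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretVol := func(name string) corev1.VolumeSource {
		return corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: name}}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			// The same Secret backs both volumes.
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: secretVol("example-secret")},
				{Name: "secret-volume-2", VolumeSource: secretVol("example-secret")},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```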
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
W1023 01:28:19.682086 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.682: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.683: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 23 01:28:20.032: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 23 01:28:22.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 01:28:24.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 01:28:26.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 01:28:28.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549300, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 23 01:28:31.051: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:28:31.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-499" for this suite.
STEP: Destroying namespace "webhook-499-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.561 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
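Once the webhook server deployment above reports Available, the test itself is just a list plus a collection delete on admissionregistration.k8s.io/v1. A sketch (the label selector is illustrative; the real test labels its webhook configurations and selects on that):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	vwc := client.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	ctx := context.TODO()
	selector := metav1.ListOptions{LabelSelector: "e2e-test=webhook"} // illustrative

	// STEP: Listing all of the created validation webhooks
	list, err := vwc.List(ctx, selector)
	if err != nil {
		panic(err)
	}
	fmt.Println("matching webhook configurations:", len(list.Items))

	// STEP: Deleting the collection of validation webhooks; afterwards the
	// previously rejected ConfigMap create is expected to succeed.
	if err := vwc.DeleteCollection(ctx, metav1.DeleteOptions{}, selector); err != nil {
		panic(err)
	}
}
```

Deleting by collection rather than one at a time is what the "Deleting the collection of validation webhooks" step refers to.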
S
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:28:19.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
W1023 01:28:19.690947 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:28:19.691: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:28:19.692: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 23 01:28:19.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-965
I1023 01:28:19.715749 35 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-965, replica count: 1
I1023 01:28:20.767421 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 01:28:21.767852 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 01:28:22.768153 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 01:28:23.768592 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 01:28:24.768953 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 01:28:24.875: INFO: Created: latency-svc-zs7tx
Oct 23 01:28:24.884: INFO: Got endpoints: latency-svc-zs7tx [15.188161ms]
Oct 23 01:28:24.890: INFO: Created: latency-svc-4npql
Oct 23 01:28:24.892: INFO: Got endpoints: latency-svc-4npql [7.827865ms]
Oct 23 01:28:24.892: INFO: Created: latency-svc-lt98b
Oct 23 01:28:24.895: INFO: Got endpoints: latency-svc-lt98b [10.906469ms]
Oct 23 01:28:24.897: INFO: Created: latency-svc-vtw24
Oct 23 01:28:24.899: INFO: Got endpoints: latency-svc-vtw24 [14.981389ms]
Oct 23 01:28:24.900: INFO: Created: latency-svc-prtmp
Oct 23 01:28:24.902: INFO: Got endpoints: latency-svc-prtmp [17.745461ms]
Oct 23 01:28:24.902: INFO: Created: latency-svc-49wbf
Oct 23 01:28:24.904: INFO: Got endpoints: latency-svc-49wbf [20.077434ms]
Oct 23 01:28:24.905: INFO: Created: latency-svc-l7l5z
Oct 23 01:28:24.907: INFO: Got endpoints: latency-svc-l7l5z [7.938826ms]
Oct 23 01:28:24.908: INFO: Created: latency-svc-w6spg
Oct 23 01:28:24.910: INFO: Created: latency-svc-v4xv5
Oct 23 01:28:24.910: INFO: Got endpoints: latency-svc-w6spg [26.013938ms]
Oct 23 01:28:24.912: INFO: Got endpoints: latency-svc-v4xv5 [27.945867ms]
Oct 23 01:28:24.913: INFO: Created: latency-svc-dqz84
Oct 23 01:28:24.915: INFO: Got endpoints: latency-svc-dqz84 [30.829759ms]
Oct 23 01:28:24.916: INFO: Created: latency-svc-km8lg
Oct 23 01:28:24.918: INFO: Got endpoints: latency-svc-km8lg [33.44164ms]
Oct 23 01:28:24.919: INFO: Created: latency-svc-k26l6
Oct 23 01:28:24.920: INFO: Got endpoints: latency-svc-k26l6 [35.678373ms]
Oct 23 01:28:24.922: INFO: Created: latency-svc-jfqzj
Oct 23 01:28:24.924: INFO: Got endpoints: latency-svc-jfqzj [39.441886ms]
Oct 23 01:28:24.925: INFO: Created: latency-svc-dw5sd
Oct 23 01:28:24.927: INFO: Got endpoints: latency-svc-dw5sd [42.579181ms]
Oct 23 01:28:24.927: INFO: Created: latency-svc-9nb8h
Oct 23 01:28:24.930: INFO: Got endpoints: latency-svc-9nb8h [44.942514ms]
Oct 23 01:28:24.930: INFO: Created: latency-svc-7gbwk
Oct 23 01:28:24.932: INFO: Got endpoints: latency-svc-7gbwk [47.848378ms]
Oct 23 01:28:24.933: INFO: Created: latency-svc-72ghn
Oct 23 01:28:24.934: INFO: Got endpoints: latency-svc-72ghn [49.801237ms]
Oct 23 01:28:24.935: INFO: Created: latency-svc-rvbv4
Oct 23 01:28:24.937: INFO: Got endpoints: latency-svc-rvbv4 [45.24705ms]
Oct 23 01:28:24.938: INFO: Created: latency-svc-zg4w7
Oct 23 01:28:24.940: INFO: Got endpoints: latency-svc-zg4w7 [45.09297ms]
Oct 23 01:28:24.941: INFO: Created: latency-svc-57hbz
Oct 23 01:28:24.943: INFO: Got endpoints: latency-svc-57hbz [40.719727ms]
Oct 23 01:28:24.945: INFO: Created: latency-svc-kqlj4
Oct 23 01:28:24.947: INFO: Got endpoints: latency-svc-kqlj4 [42.130311ms]
Oct 23 01:28:24.948: INFO: Created: latency-svc-bkrb8
Oct 23 01:28:24.949: INFO: Got endpoints: latency-svc-bkrb8 [42.022609ms]
Oct 23 01:28:24.950: INFO: Created: latency-svc-c8bxw
Oct 23 01:28:24.953: INFO: Got endpoints: latency-svc-c8bxw [42.077535ms]
Oct 23 01:28:24.953: INFO: Created: latency-svc-6fgcm
Oct 23 01:28:24.955: INFO: Got endpoints: latency-svc-6fgcm [42.97154ms]
Oct 23 01:28:24.956: INFO: Created: latency-svc-w7nn8
Oct 23 01:28:24.958: INFO: Got endpoints: latency-svc-w7nn8 [42.999981ms]
Oct 23 01:28:24.960: INFO: Created: latency-svc-65b7v
Oct 23 01:28:24.962: INFO: Got endpoints: latency-svc-65b7v [43.659722ms]
Oct 23 01:28:24.962: INFO: Created: latency-svc-9ls6d
Oct 23 01:28:24.965: INFO: Created: latency-svc-tfsg6
Oct 23 01:28:24.966: INFO: Got endpoints: latency-svc-9ls6d [45.293166ms]
Oct 23 01:28:24.968: INFO: Got endpoints: latency-svc-tfsg6 [44.372959ms]
Oct 23 01:28:24.969: INFO: Created: latency-svc-vxp5b
Oct 23 01:28:24.972: INFO: Got endpoints: latency-svc-vxp5b [45.219519ms]
Oct 23 01:28:24.972: INFO: Created: latency-svc-lv5t2
Oct 23 01:28:24.975: INFO: Got endpoints: latency-svc-lv5t2 [45.809127ms]
Oct 23 01:28:24.976: INFO: Created: latency-svc-ngxg5
Oct 23 01:28:24.977: INFO: Got endpoints: latency-svc-ngxg5 [44.987006ms]
Oct 23 01:28:24.979: INFO: Created: latency-svc-mbqfz
Oct 23 01:28:24.980: INFO: Got endpoints: latency-svc-mbqfz [45.661869ms]
Oct 23 01:28:24.982: INFO: Created: latency-svc-h7kqt
Oct 23 01:28:24.985: INFO: Created: latency-svc-rj4gm
Oct 23 01:28:24.987: INFO: Created: latency-svc-pbrkw
Oct 23 01:28:24.989: INFO: Created: latency-svc-lsv7s
Oct 23 01:28:24.993: INFO: Created: latency-svc-sw9b2
Oct 23 01:28:24.995: INFO: Created: latency-svc-l6kfx
Oct 23 01:28:24.996: INFO: Created: latency-svc-2bjm6
Oct 23 01:28:24.999: INFO: Created: latency-svc-jq925
Oct 23 01:28:25.001: INFO: Created: latency-svc-vv26c
Oct 23 01:28:25.004: INFO: Created: latency-svc-b427s
Oct 23 01:28:25.007: INFO: Created: latency-svc-t5fwm
Oct 23 01:28:25.009: INFO: Created: latency-svc-zgpz4
Oct 23 01:28:25.012: INFO: Created: latency-svc-lb8v9
Oct 23 01:28:25.015: INFO: Created: latency-svc-w2gvk
Oct 23 01:28:25.017: INFO: Created: latency-svc-kwl8m
Oct 23 01:28:25.029: INFO: Got endpoints: latency-svc-h7kqt [92.118955ms]
Oct 23 01:28:25.038: INFO: Created: latency-svc-dpqrz
Oct 23 01:28:25.078: INFO: Got endpoints: latency-svc-rj4gm [137.592603ms]
Oct 23 01:28:25.083: INFO: Created: latency-svc-rkf55
Oct 23 01:28:25.129: INFO: Got endpoints: latency-svc-pbrkw [185.794849ms]
Oct 23 01:28:25.134: INFO: Created: latency-svc-vqj99
Oct 23 01:28:25.178: INFO: Got endpoints: latency-svc-lsv7s [231.614999ms]
Oct 23 01:28:25.184: INFO: Created: latency-svc-rr5ww
Oct 23 01:28:25.228: INFO: Got endpoints: latency-svc-sw9b2 [278.81781ms]
Oct 23 01:28:25.234: INFO: Created: latency-svc-hqqvx
Oct 23 01:28:25.279: INFO: Got endpoints: latency-svc-l6kfx [326.869172ms]
Oct 23 01:28:25.287: INFO: Created: latency-svc-5bt6b
Oct 23 01:28:25.328: INFO: Got endpoints: latency-svc-2bjm6 [372.591055ms]
Oct 23 01:28:25.333: INFO: Created: latency-svc-mvclb
Oct 23 01:28:25.379: INFO: Got endpoints: latency-svc-jq925 [420.18839ms]
Oct 23 01:28:25.386: INFO: Created: latency-svc-thrmb
Oct 23 01:28:25.428: INFO: Got endpoints: latency-svc-vv26c [466.497336ms]
Oct 23 01:28:25.434: INFO: Created: latency-svc-vj2zh
Oct 23 01:28:25.479: INFO: Got endpoints: latency-svc-b427s [513.026948ms]
Oct 23 01:28:25.488: INFO: Created: latency-svc-swq52
Oct 23 01:28:25.529: INFO: Got endpoints: latency-svc-t5fwm [560.571587ms]
Oct 23 01:28:25.535: INFO: Created: latency-svc-6hdvs
Oct 23 01:28:25.579: INFO: Got endpoints: latency-svc-zgpz4 [606.707617ms]
Oct 23 01:28:25.584: INFO: Created: latency-svc-59lx2
Oct 23 01:28:25.628: INFO: Got endpoints: latency-svc-lb8v9 [652.694523ms]
Oct 23 01:28:25.634: INFO: Created: latency-svc-qngmb
Oct 23 01:28:25.679: INFO: Got endpoints: latency-svc-w2gvk [701.328551ms]
Oct 23 01:28:25.685: INFO: Created: latency-svc-zk9wz
Oct 23 01:28:25.729: INFO: Got endpoints: latency-svc-kwl8m [748.464948ms]
Oct 23 01:28:25.734: INFO: Created: latency-svc-mqb49
Oct 23 01:28:25.779: INFO: Got endpoints: latency-svc-dpqrz [749.83502ms]
Oct 23 01:28:25.785: INFO: Created: latency-svc-8pn9b
Oct 23 01:28:25.828: INFO: Got endpoints: latency-svc-rkf55 [750.454199ms]
Oct 23 01:28:25.834: INFO: Created: latency-svc-8nt6w
Oct 23 01:28:25.879: INFO: Got endpoints: latency-svc-vqj99 [750.588523ms]
Oct 23 01:28:25.884: INFO: Created: latency-svc-nwmq5
Oct 23 01:28:25.928: INFO: Got endpoints: latency-svc-rr5ww [749.532814ms]
Oct 23 01:28:25.933: INFO: Created: latency-svc-j95dg
Oct 23 01:28:25.979: INFO: Got endpoints: latency-svc-hqqvx [750.770034ms]
Oct 23 01:28:25.988: INFO: Created: latency-svc-vmppq
Oct 23 01:28:26.029: INFO: Got endpoints: latency-svc-5bt6b [749.979891ms]
Oct 23 01:28:26.035: INFO: Created: latency-svc-wngnz
Oct 23 01:28:26.081: INFO: Got endpoints: latency-svc-mvclb [752.746651ms]
Oct 23 01:28:26.086: INFO: Created: latency-svc-pxh6h
Oct 23 01:28:26.128: INFO: Got endpoints: latency-svc-thrmb [749.542839ms]
Oct 23 01:28:26.133: INFO: Created: latency-svc-h9fs2
Oct 23 01:28:26.179: INFO: Got endpoints: latency-svc-vj2zh [750.671947ms]
Oct 23 01:28:26.184: INFO: Created: latency-svc-9bvqs
Oct 23 01:28:26.228: INFO: Got endpoints: latency-svc-swq52 [749.568997ms]
Oct 23 01:28:26.234: INFO: Created: latency-svc-8z7f9
Oct 23 01:28:26.279: INFO: Got endpoints: latency-svc-6hdvs [750.109295ms]
Oct 23 01:28:26.286: INFO: Created: latency-svc-fhtp4
Oct 23 01:28:26.329: INFO: Got endpoints: latency-svc-59lx2 [749.836112ms]
Oct 23 01:28:26.336: INFO: Created: latency-svc-dvpm9
Oct 23 01:28:26.379: INFO: Got endpoints: latency-svc-qngmb [750.785605ms]
Oct 23 01:28:26.384: INFO: Created: latency-svc-kmlsc
Oct 23 01:28:26.429: INFO: Got endpoints: latency-svc-zk9wz [750.049646ms]
Oct 23 01:28:26.434: INFO: Created: latency-svc-kmwbr
Oct 23 01:28:26.479: INFO: Got endpoints: latency-svc-mqb49 [750.248633ms]
Oct 23 01:28:26.484: INFO: Created: latency-svc-knp2j
Oct 23 01:28:26.529: INFO: Got endpoints: latency-svc-8pn9b [749.708826ms]
Oct 23 01:28:26.534: INFO: Created: latency-svc-8fll2
Oct 23 01:28:26.579: INFO: Got endpoints: latency-svc-8nt6w [750.411926ms]
Oct 23 01:28:26.584: INFO: Created: latency-svc-l49sl
Oct 23 01:28:26.628: INFO: Got endpoints: latency-svc-nwmq5 [749.156959ms]
Oct 23 01:28:26.634: INFO: Created: latency-svc-2mcq8
Oct 23 01:28:26.679: INFO: Got endpoints: latency-svc-j95dg [751.107668ms]
Oct 23 01:28:26.685: INFO: Created: latency-svc-v2zqf
Oct 23 01:28:26.728: INFO: Got endpoints: latency-svc-vmppq [749.191674ms]
Oct 23 01:28:26.734: INFO: Created: latency-svc-6567z
Oct 23 01:28:26.778: INFO: Got endpoints: latency-svc-wngnz [748.498766ms]
Oct 23 01:28:26.785: INFO: Created: latency-svc-lzhfx
Oct 23 01:28:26.828: INFO: Got endpoints: latency-svc-pxh6h [747.339366ms]
Oct 23 01:28:26.834: INFO: Created: latency-svc-k2hgc
Oct 23 01:28:26.878: INFO: Got endpoints: latency-svc-h9fs2 [749.77225ms]
Oct 23 01:28:26.883: INFO: Created: latency-svc-w2gw6
Oct 23 01:28:26.930: INFO: Got endpoints: latency-svc-9bvqs [750.819243ms]
Oct 23 01:28:26.936: INFO: Created: latency-svc-jrzvx
Oct 23 01:28:26.979: INFO: Got endpoints: latency-svc-8z7f9 [750.676757ms]
Oct 23 01:28:26.984: INFO: Created: latency-svc-kmxxs
Oct 23 01:28:27.029: INFO: Got endpoints: latency-svc-fhtp4 [749.46563ms]
Oct 23 01:28:27.035: INFO: Created: latency-svc-lqtrd
Oct 23 01:28:27.079: INFO: Got endpoints: latency-svc-dvpm9 [750.099511ms]
Oct 23 01:28:27.085: INFO: Created: latency-svc-hp86t
Oct 23 01:28:27.130: INFO: Got endpoints: latency-svc-kmlsc [750.472839ms]
Oct 23 01:28:27.136: INFO: Created: latency-svc-wt42q
Oct 23 01:28:27.179: INFO: Got endpoints: latency-svc-kmwbr [750.058716ms]
Oct 23 01:28:27.185: INFO: Created: latency-svc-r9v2k
Oct 23 01:28:27.230: INFO: Got endpoints: latency-svc-knp2j [750.842177ms]
Oct 23 01:28:27.235: INFO: Created: latency-svc-bb5xl
Oct 23 01:28:27.279: INFO: Got endpoints: latency-svc-8fll2 [749.414087ms]
Oct 23 01:28:27.283: INFO: Created: latency-svc-86ddj
Oct 23 01:28:27.328: INFO: Got endpoints: latency-svc-l49sl [749.335062ms]
Oct 23 01:28:27.334: INFO: Created: latency-svc-nxbsz
Oct 23 01:28:27.379: INFO: Got endpoints: latency-svc-2mcq8 [750.323573ms]
Oct 23 01:28:27.384: INFO: Created: latency-svc-jfzcc
Oct 23 01:28:27.429: INFO: Got endpoints: latency-svc-v2zqf [749.682151ms]
Oct 23 01:28:27.434: INFO: Created: latency-svc-8mj68
Oct 23 01:28:27.478: INFO: Got endpoints: latency-svc-6567z [749.922463ms]
Oct 23 01:28:27.483: INFO: Created: latency-svc-klh8q
Oct 23 01:28:27.529: INFO: Got endpoints: latency-svc-lzhfx [750.767285ms]
Oct 23 01:28:27.535: INFO: Created: latency-svc-lzgv4
Oct 23 01:28:27.579: INFO: Got endpoints: latency-svc-k2hgc [751.142368ms]
Oct 23 01:28:27.586: INFO: Created: latency-svc-bdrr2
Oct 23 01:28:27.628: INFO: Got endpoints: latency-svc-w2gw6 [750.093847ms]
Oct 23 01:28:27.635: INFO: Created: latency-svc-pk7c9
Oct 23 01:28:27.679: INFO: Got endpoints: latency-svc-jrzvx [748.892244ms]
Oct 23 01:28:27.684: INFO: Created: latency-svc-zfwzf
Oct 23 01:28:27.731: INFO: Got endpoints: latency-svc-kmxxs [751.765024ms]
Oct 23 01:28:27.737: INFO: Created: latency-svc-8kkw5
Oct 23 01:28:27.779: INFO: Got endpoints: latency-svc-lqtrd [750.292603ms]
Oct 23 01:28:27.786: INFO: Created: latency-svc-kbwwh
Oct 23 01:28:27.829: INFO: Got endpoints: latency-svc-hp86t [749.684315ms]
Oct 23 01:28:27.835: INFO: Created: latency-svc-hz7nx
Oct 23 01:28:27.878: INFO: Got endpoints: latency-svc-wt42q [748.429508ms]
Oct 23 01:28:27.883: INFO: Created: latency-svc-bt274
Oct 23 01:28:27.929: INFO: Got endpoints: latency-svc-r9v2k [750.155011ms]
Oct 23 01:28:27.935: INFO: Created: latency-svc-d8lz2
Oct 23 01:28:27.980: INFO: Got endpoints: latency-svc-bb5xl [749.865026ms]
Oct 23 01:28:27.986: INFO: Created: latency-svc-jrp58
Oct 23 01:28:28.029: INFO: Got endpoints: latency-svc-86ddj [750.210256ms]
Oct 23 01:28:28.034: INFO: Created: latency-svc-5fbdw
Oct 23 01:28:28.079: INFO: Got endpoints: latency-svc-nxbsz [750.47184ms]
Oct 23 01:28:28.084: INFO: Created: latency-svc-7hj8r
Oct 23 01:28:28.130: INFO: Got endpoints: latency-svc-jfzcc [750.965079ms]
Oct 23 01:28:28.136: INFO: Created: latency-svc-jm264
Oct 23 01:28:28.179: INFO: Got endpoints: latency-svc-8mj68 [749.998165ms]
Oct 23 01:28:28.184: INFO: Created: latency-svc-48zhw
Oct 23 01:28:28.229: INFO: Got endpoints: latency-svc-klh8q [750.841203ms]
Oct 23 01:28:28.236: INFO: Created: latency-svc-665lx
Oct 23 01:28:28.279: INFO: Got endpoints: latency-svc-lzgv4 [750.072492ms]
Oct 23 01:28:28.285: INFO: Created: latency-svc-x96kz
Oct 23 01:28:28.328: INFO: Got endpoints: latency-svc-bdrr2 [748.972005ms]
Oct 23 01:28:28.335: INFO: Created: latency-svc-rn99r
Oct 23 01:28:28.379: INFO: Got endpoints: latency-svc-pk7c9 [750.78997ms]
Oct 23 01:28:28.386: INFO: Created: latency-svc-j5b4l
Oct 23 01:28:28.428: INFO: Got endpoints: latency-svc-zfwzf [749.568937ms]
Oct 23 01:28:28.434: INFO: Created: latency-svc-4g5w2
Oct 23 01:28:28.479: INFO: Got endpoints: latency-svc-8kkw5 [747.825336ms]
Oct 23 01:28:28.484: INFO: Created: latency-svc-bx7cj
Oct 23 01:28:28.528: INFO: Got endpoints: latency-svc-kbwwh [749.293479ms]
Oct 23 01:28:28.534: INFO: Created: latency-svc-pmsdk
Oct 23 01:28:28.579: INFO: Got endpoints: latency-svc-hz7nx [750.119036ms]
Oct 23 01:28:28.585: INFO: Created: latency-svc-v9prx
Oct 23 01:28:28.629: INFO: Got endpoints: latency-svc-bt274 [751.291368ms]
Oct 23 01:28:28.635: INFO: Created: latency-svc-j26wt
Oct 23 01:28:28.678: INFO: Got endpoints: latency-svc-d8lz2 [749.032624ms]
Oct 23 01:28:28.685: INFO: Created: latency-svc-wtnpf
Oct 23 01:28:28.729: INFO: Got endpoints: latency-svc-jrp58 [748.681209ms]
Oct 23 01:28:28.735: INFO: Created: latency-svc-z6jkq
Oct 23 01:28:28.779: INFO: Got endpoints: latency-svc-5fbdw [749.95792ms]
Oct 23 01:28:28.784: INFO: Created: latency-svc-x6667
Oct 23 01:28:28.828: INFO: Got endpoints: latency-svc-7hj8r [749.313275ms]
Oct 23 01:28:28.835: INFO: Created: latency-svc-fqfbv
Oct 23 01:28:28.880: INFO: Got endpoints: latency-svc-jm264 [750.331427ms]
Oct 23 01:28:28.886: INFO: Created: latency-svc-47dvk
Oct 23 01:28:28.928: INFO: Got endpoints: latency-svc-48zhw [749.456881ms]
Oct 23 01:28:28.934: INFO: Created: latency-svc-stj5t
Oct 23 01:28:28.978: INFO: Got endpoints: latency-svc-665lx [748.840159ms]
Oct 23 01:28:28.984: INFO: Created: latency-svc-x9qcw
Oct 23 01:28:29.029: INFO: Got endpoints: latency-svc-x96kz [750.086661ms]
Oct 23 01:28:29.035: INFO: Created: latency-svc-tnsvh
Oct 23 01:28:29.078: INFO: Got endpoints: latency-svc-rn99r [749.986437ms]
Oct 23 01:28:29.085: INFO: Created: latency-svc-9zff8
Oct 23 01:28:29.128: INFO: Got endpoints: latency-svc-j5b4l [749.298715ms]
Oct 23 01:28:29.135: INFO: Created: latency-svc-k4tjs
Oct 23 01:28:29.179: INFO: Got endpoints: latency-svc-4g5w2 [750.945624ms]
Oct 23 01:28:29.186: INFO: Created: latency-svc-58bzb
Oct 23 01:28:29.228: INFO: Got endpoints: latency-svc-bx7cj [749.220928ms]
Oct 23 01:28:29.234: INFO: Created: latency-svc-nh7rb
Oct 23 01:28:29.280: INFO: Got endpoints: latency-svc-pmsdk [751.102732ms]
Oct 23 01:28:29.286: INFO: Created: latency-svc-slslh
Oct 23 01:28:29.329: INFO: Got endpoints: latency-svc-v9prx [750.093294ms]
Oct 23 01:28:29.335: INFO: Created: latency-svc-7lv2h
Oct 23 01:28:29.380: INFO: Got endpoints: latency-svc-j26wt [750.093344ms]
Oct 23 01:28:29.385: INFO: Created: latency-svc-zrxf7
Oct 23 01:28:29.429: INFO: Got endpoints: latency-svc-wtnpf [750.587948ms]
Oct 23 01:28:29.434: INFO: Created: latency-svc-jq5jh
Oct 23 01:28:29.480: INFO: Got endpoints: latency-svc-z6jkq [751.130926ms]
Oct 23 01:28:29.486: INFO: Created: latency-svc-pw4ps
Oct 23 01:28:29.528: INFO: Got endpoints: latency-svc-x6667 [749.618748ms]
Oct 23 01:28:29.535: INFO: Created: latency-svc-4wrd6
Oct 23 01:28:29.579: INFO: Got endpoints: latency-svc-fqfbv [751.120461ms]
Oct 23 01:28:29.585: INFO: Created: latency-svc-8vh7b
Oct 23 01:28:29.629: INFO: Got endpoints: latency-svc-47dvk [748.225582ms]
Oct 23 01:28:29.634: INFO: Created: latency-svc-cwvqw
Oct 23 01:28:29.678: INFO: Got endpoints: latency-svc-stj5t [750.000599ms]
Oct 23 01:28:29.684: INFO: Created: latency-svc-flmkm
Oct 23 01:28:29.729: INFO: Got endpoints: latency-svc-x9qcw [750.697291ms]
Oct 23 01:28:29.734: INFO: Created: latency-svc-ppbnv
Oct 23 01:28:29.780: INFO: Got endpoints: latency-svc-tnsvh [750.403684ms]
Oct 23 01:28:29.785: INFO: Created: latency-svc-h6vtw
Oct 23 01:28:29.828: INFO: Got endpoints: latency-svc-9zff8 [749.738763ms]
Oct 23 01:28:29.833: INFO: Created: latency-svc-xmn27
Oct 23 01:28:29.879: INFO: Got endpoints: latency-svc-k4tjs [750.389433ms]
Oct 23 01:28:29.884: INFO: Created: latency-svc-mts26
Oct 23 01:28:29.928: INFO: Got endpoints: latency-svc-58bzb [748.651619ms]
Oct 23 01:28:29.934: INFO: Created: latency-svc-fl2gx
Oct 23 01:28:29.979: INFO: Got endpoints: latency-svc-nh7rb [750.420517ms]
Oct 23 01:28:29.985: INFO: Created: latency-svc-pmshp
Oct 23 01:28:30.028: INFO: Got endpoints: latency-svc-slslh [747.932923ms]
Oct 23 01:28:30.034: INFO: Created: latency-svc-cx9xf
Oct 23 01:28:30.079: INFO: Got endpoints: latency-svc-7lv2h [749.95273ms]
Oct 23 01:28:30.085: INFO: Created: latency-svc-ln6v2
Oct 23 01:28:30.129: INFO: Got endpoints: latency-svc-zrxf7 [749.201749ms]
Oct 23 01:28:30.135: INFO: Created: latency-svc-dd6xc
Oct 23 01:28:30.180: INFO: Got endpoints: latency-svc-jq5jh [750.776114ms]
Oct 23 01:28:30.185: INFO: Created: latency-svc-7fb9v
Oct 23 01:28:30.229: INFO: Got endpoints: latency-svc-pw4ps [749.652981ms]
Oct 23 01:28:30.235: INFO: Created: latency-svc-7s6zw
Oct 23 01:28:30.278: INFO: Got endpoints: latency-svc-4wrd6 [749.754896ms]
Oct 23 01:28:30.285: INFO: Created: latency-svc-4wfkr
Oct 23 01:28:30.329: INFO: Got endpoints: latency-svc-8vh7b [749.741306ms]
Oct 23 01:28:30.335: INFO: Created: latency-svc-spp8c
Oct 23 01:28:30.378: INFO: Got endpoints: latency-svc-cwvqw [749.800749ms]
Oct 23 01:28:30.385: INFO: Created: latency-svc-tcl5d
Oct 23 01:28:30.429: INFO: Got endpoints: latency-svc-flmkm [750.582819ms]
Oct 23 01:28:30.436: INFO: Created: latency-svc-tb4nt
Oct 23 01:28:30.479: INFO: Got endpoints: latency-svc-ppbnv [750.228027ms]
Oct 23 01:28:30.485: INFO: Created: latency-svc-kxpjt
Oct 23 01:28:30.529: INFO: Got endpoints: latency-svc-h6vtw [749.065228ms]
Oct 23 01:28:30.534: INFO: Created: latency-svc-zl646
Oct 23 01:28:30.578: INFO: Got endpoints: latency-svc-xmn27 [750.037958ms]
Oct 23 01:28:30.585: INFO: Created: latency-svc-st7hv
Oct 23 01:28:30.679: INFO: Got endpoints: latency-svc-mts26 [800.514932ms]
Oct 23 01:28:30.684: INFO: Created: latency-svc-bmm7f
Oct 23 01:28:30.729: INFO: Got endpoints: latency-svc-fl2gx [801.054243ms]
Oct 23 01:28:30.736: INFO: Created: latency-svc-nw7s2
Oct 23 01:28:30.828: INFO: Got endpoints: latency-svc-pmshp [849.53937ms]
Oct 23 01:28:30.834: INFO: Created: latency-svc-xp8nb
Oct 23 01:28:30.878: INFO: Got endpoints: latency-svc-cx9xf [850.670489ms]
Oct 23 01:28:30.885: INFO: Created: latency-svc-l8n6p
Oct 23 01:28:30.980: INFO: Got endpoints: latency-svc-ln6v2 [900.326076ms]
Oct 23 01:28:30.985: INFO: Created: latency-svc-wkkv9
Oct 23 01:28:31.028: INFO: Got endpoints: latency-svc-dd6xc [899.523762ms]
Oct 23 01:28:31.034: INFO: Created: latency-svc-c2mnc
Oct 23 01:28:31.078: INFO: Got endpoints: latency-svc-7fb9v [898.33967ms]
Oct 23 01:28:31.084: INFO: Created: latency-svc-hpct9
Oct 23 01:28:31.128: INFO: Got endpoints: latency-svc-7s6zw [898.318182ms]
Oct 23 01:28:31.135: INFO: Created: latency-svc-wdp92
Oct 23 01:28:31.178: INFO: Got endpoints: latency-svc-4wfkr [899.918214ms]
Oct 23 01:28:31.184: INFO: Created: latency-svc-ltz4f
Oct 23 01:28:31.228: INFO: Got endpoints: latency-svc-spp8c [899.156093ms]
Oct 23 01:28:31.234: INFO: Created: latency-svc-pxhsw
Oct 23 01:28:31.279: INFO: Got endpoints: latency-svc-tcl5d [900.7483ms]
Oct 23 01:28:31.284: INFO: Created: latency-svc-bl4f8
Oct 23 01:28:31.329: INFO: Got endpoints: latency-svc-tb4nt [900.274122ms]
Oct 23 01:28:31.335: INFO: Created: latency-svc-jnb4l
Oct 23 01:28:31.379: INFO: Got endpoints: latency-svc-kxpjt [899.841748ms]
Oct 23 01:28:31.386: INFO: Created: latency-svc-bjtz2
Oct 23 01:28:31.430: INFO: Got endpoints: latency-svc-zl646 [900.966984ms]
Oct 23 01:28:31.434: INFO: Created: latency-svc-gtlfs
Oct 23 01:28:31.478: INFO: Got endpoints: latency-svc-st7hv [900.036969ms]
Oct 23 01:28:31.485: INFO: Created: latency-svc-mh8dn
Oct 23 01:28:31.529: INFO: Got endpoints: latency-svc-bmm7f [849.523389ms]
Oct 23 01:28:31.536: INFO: Created: latency-svc-v8pqj
Oct 23 01:28:31.578: INFO: Got endpoints: latency-svc-nw7s2 [848.664047ms]
Oct 23 01:28:31.584: INFO: Created: latency-svc-vjh22
Oct 23 01:28:31.629: INFO: Got endpoints: latency-svc-xp8nb [800.98913ms]
Oct 23 01:28:31.634: INFO: Created: latency-svc-krs9l
Oct 23 01:28:31.679: INFO: Got endpoints: latency-svc-l8n6p [800.436517ms]
Oct 23 01:28:31.685: INFO: Created: latency-svc-77wp4
Oct 23 01:28:31.729: INFO: Got endpoints: latency-svc-wkkv9 [749.399752ms]
Oct 23 01:28:31.735: INFO: Created: latency-svc-xp47r
Oct 23 01:28:31.779: INFO: Got endpoints: latency-svc-c2mnc [750.733559ms]
Oct 23 01:28:31.784: INFO: Created: latency-svc-wtd4l
Oct 23 01:28:31.828: INFO: Got endpoints: latency-svc-hpct9 [749.596093ms]
Oct 23 01:28:31.834: INFO: Created: latency-svc-jg2gm
Oct 23 01:28:31.879: INFO: Got endpoints: latency-svc-wdp92 [750.737167ms]
Oct 23 01:28:31.885: INFO: Created: latency-svc-vz8mc
Oct 23 01:28:31.930: INFO: Got endpoints: latency-svc-ltz4f [751.212966ms]
Oct 23 01:28:31.934: INFO: Created: latency-svc-fv5cc
Oct 23 01:28:32.029: INFO: Got endpoints: latency-svc-pxhsw [800.706315ms]
Oct 23 01:28:32.036: INFO: Created: latency-svc-xvskc
Oct 23 01:28:32.079: INFO: Got endpoints: latency-svc-bl4f8 [799.559773ms]
Oct 23 01:28:32.084: INFO: Created: latency-svc-mddnf
Oct 23 01:28:32.129: INFO: Got endpoints: latency-svc-jnb4l [799.45539ms]
Oct 23 01:28:32.135: INFO: Created: latency-svc-848lk
Oct 23 01:28:32.179: INFO: Got endpoints: latency-svc-bjtz2 [799.929723ms]
Oct 23 01:28:32.185: INFO: Created: latency-svc-g7m8g
Oct 23 01:28:32.229: INFO: Got endpoints: latency-svc-gtlfs [799.329608ms]
Oct 23 01:28:32.234: INFO: Created: latency-svc-r5hnn
Oct 23 01:28:32.279: INFO: Got endpoints: latency-svc-mh8dn [800.562589ms]
Oct 23 01:28:32.285: INFO: Created: latency-svc-9rsl5
Oct 23 01:28:32.329: INFO: Got endpoints: latency-svc-v8pqj [799.568203ms]
Oct 23 01:28:32.335: INFO: Created: latency-svc-qv89v
Oct 23 01:28:32.379: INFO: Got endpoints: latency-svc-vjh22 [800.626451ms]
Oct 23 01:28:32.385: INFO: Created: latency-svc-hhhsz
Oct 23 01:28:32.429: INFO: Got endpoints: latency-svc-krs9l [799.982764ms]
Oct 23 01:28:32.435: INFO: Created: latency-svc-d897l
Oct 23 01:28:32.479: INFO: Got endpoints: latency-svc-77wp4 [800.19157ms]
Oct 23 01:28:32.485: INFO: Created: latency-svc-q5xvp
Oct 23 01:28:32.529: INFO: Got endpoints: latency-svc-xp47r [799.939424ms]
Oct 23 01:28:32.534: INFO: Created: latency-svc-gr5fj
Oct 23 01:28:32.578: INFO: Got endpoints: latency-svc-wtd4l [799.115209ms]
Oct 23 01:28:32.584: INFO: Created: latency-svc-82tcs
Oct 23 01:28:32.630: INFO: Got endpoints: latency-svc-jg2gm [801.632508ms]
Oct 23 01:28:32.636: INFO: Created: latency-svc-gvdg7
Oct 23 01:28:32.679: INFO: Got endpoints: latency-svc-vz8mc [799.987557ms]
Oct 23 01:28:32.684: INFO: Created: latency-svc-lk5g2
Oct 23 01:28:32.732: INFO: Got endpoints: latency-svc-fv5cc [802.360555ms]
Oct 23 01:28:32.744: INFO: Created: latency-svc-b7bg2
Oct 23 01:28:32.778: INFO: Got endpoints: latency-svc-xvskc [749.245794ms]
Oct 23 01:28:32.785: INFO: Created: latency-svc-75z2v
Oct 23 01:28:32.829: INFO: Got endpoints: latency-svc-mddnf [750.282948ms]
Oct 23 01:28:32.834: INFO: Created: latency-svc-r7sgp
Oct 23 01:28:32.879: INFO: Got endpoints: latency-svc-848lk [749.67555ms]
Oct 23 01:28:32.884: INFO: Created: latency-svc-pgbml
Oct 23 01:28:32.929: INFO: Got endpoints: latency-svc-g7m8g [749.562647ms]
Oct 23 01:28:32.980: INFO: Got endpoints: latency-svc-r5hnn [750.867584ms]
Oct 23 01:28:33.029: INFO: Got endpoints: latency-svc-9rsl5 [749.488756ms]
Oct 23 01:28:33.079: INFO: Got endpoints: latency-svc-qv89v [750.172556ms]
Oct 23 01:28:33.129: INFO: Got endpoints: latency-svc-hhhsz [749.898426ms]
Oct 23 01:28:33.179: INFO: Got endpoints: latency-svc-d897l [749.369156ms]
Oct 23 01:28:33.229: INFO: Got endpoints: latency-svc-q5xvp [749.993889ms]
Oct 23 01:28:33.279: INFO: Got endpoints: latency-svc-gr5fj [749.534415ms]
Oct 23 01:28:33.330: INFO: Got endpoints: latency-svc-82tcs [751.281237ms]
Oct 23 01:28:33.379: INFO: Got endpoints: latency-svc-gvdg7 [749.558219ms]
Oct 23 01:28:33.429: INFO: Got endpoints: latency-svc-lk5g2 [750.166845ms]
Oct 23 01:28:33.479: INFO: Got endpoints: latency-svc-b7bg2 [747.246536ms]
Oct 23 01:28:33.528: INFO: Got endpoints: latency-svc-75z2v [749.935666ms]
Oct 23 01:28:33.579: INFO: Got endpoints: latency-svc-r7sgp [750.030861ms]
Oct 23 01:28:33.629: INFO: Got endpoints: latency-svc-pgbml [750.066888ms]
Oct 23 01:28:33.629: INFO: Latencies: [7.827865ms 7.938826ms 10.906469ms 14.981389ms 17.745461ms 20.077434ms 26.013938ms 27.945867ms 30.829759ms 33.44164ms 35.678373ms 39.441886ms 40.719727ms 42.022609ms 42.077535ms 42.130311ms 42.579181ms 42.97154ms 42.999981ms 43.659722ms 44.372959ms 44.942514ms 44.987006ms 45.09297ms 45.219519ms 45.24705ms 45.293166ms 45.661869ms 45.809127ms 47.848378ms 49.801237ms 92.118955ms 137.592603ms 185.794849ms 231.614999ms 278.81781ms 326.869172ms 372.591055ms 420.18839ms 466.497336ms 513.026948ms 560.571587ms 606.707617ms 652.694523ms 701.328551ms 747.246536ms 747.339366ms 747.825336ms 747.932923ms 748.225582ms 748.429508ms 748.464948ms 748.498766ms 748.651619ms 748.681209ms 748.840159ms 748.892244ms 748.972005ms 749.032624ms 749.065228ms 749.156959ms 749.191674ms 749.201749ms 749.220928ms 749.245794ms 749.293479ms 749.298715ms 749.313275ms 749.335062ms 749.369156ms 749.399752ms 749.414087ms 749.456881ms 749.46563ms 749.488756ms 749.532814ms 749.534415ms 749.542839ms 749.558219ms 749.562647ms 749.568937ms 749.568997ms 749.596093ms 749.618748ms 749.652981ms 749.67555ms 749.682151ms 749.684315ms 749.708826ms 749.738763ms 749.741306ms 749.754896ms 749.77225ms 749.800749ms 749.83502ms 749.836112ms 749.865026ms 749.898426ms 749.922463ms 749.935666ms 749.95273ms 749.95792ms 749.979891ms 749.986437ms 749.993889ms 749.998165ms 750.000599ms 750.030861ms 750.037958ms 750.049646ms 750.058716ms 750.066888ms 750.072492ms 750.086661ms 750.093294ms 750.093344ms 750.093847ms 750.099511ms 750.109295ms 750.119036ms 750.155011ms 750.166845ms 750.172556ms 750.210256ms 750.228027ms 750.248633ms 750.282948ms 750.292603ms 750.323573ms 750.331427ms 750.389433ms 750.403684ms 750.411926ms 750.420517ms 750.454199ms 750.47184ms 750.472839ms 750.582819ms 750.587948ms 750.588523ms 750.671947ms 750.676757ms 750.697291ms 750.733559ms 750.737167ms 750.767285ms 750.770034ms 750.776114ms 750.785605ms 750.78997ms 750.819243ms
750.841203ms 750.842177ms 750.867584ms 750.945624ms 750.965079ms 751.102732ms 751.107668ms 751.120461ms 751.130926ms 751.142368ms 751.212966ms 751.281237ms 751.291368ms 751.765024ms 752.746651ms 799.115209ms 799.329608ms 799.45539ms 799.559773ms 799.568203ms 799.929723ms 799.939424ms 799.982764ms 799.987557ms 800.19157ms 800.436517ms 800.514932ms 800.562589ms 800.626451ms 800.706315ms 800.98913ms 801.054243ms 801.632508ms 802.360555ms 848.664047ms 849.523389ms 849.53937ms 850.670489ms 898.318182ms 898.33967ms 899.156093ms 899.523762ms 899.841748ms 899.918214ms 900.036969ms 900.274122ms 900.326076ms 900.7483ms 900.966984ms] Oct 23 01:28:33.629: INFO: 50 %ile: 749.95273ms Oct 23 01:28:33.629: INFO: 90 %ile: 800.706315ms Oct 23 01:28:33.629: INFO: 99 %ile: 900.7483ms Oct 23 01:28:33.629: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:33.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-965" for this suite. • [SLOW TEST:13.977 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":1,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:30.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:28:30.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0" in namespace "projected-8068" to be "Succeeded or Failed" Oct 23 01:28:30.229: INFO: Pod "downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.805864ms Oct 23 01:28:32.231: INFO: Pod "downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005370174s Oct 23 01:28:34.235: INFO: Pod "downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009292197s STEP: Saw pod success Oct 23 01:28:34.235: INFO: Pod "downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0" satisfied condition "Succeeded or Failed" Oct 23 01:28:34.238: INFO: Trying to get logs from node node1 pod downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0 container client-container: STEP: delete the pod Oct 23 01:28:34.251: INFO: Waiting for pod downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0 to disappear Oct 23 01:28:34.253: INFO: Pod downwardapi-volume-00220a43-82cc-4abb-ae31-f46deb5b7cc0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:34.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8068" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:34.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-a5236547-5e2b-449d-9a9e-70e9d283794f STEP: Creating a pod to test consume secrets Oct 23 01:28:34.334: INFO: Waiting up to 5m0s for pod "pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73" in namespace "secrets-1750" to be "Succeeded or Failed" Oct 23 01:28:34.336: INFO: Pod "pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73": Phase="Pending", Reason="", readiness=false. Elapsed: 1.742215ms Oct 23 01:28:36.341: INFO: Pod "pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007169176s Oct 23 01:28:38.346: INFO: Pod "pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011705591s STEP: Saw pod success Oct 23 01:28:38.346: INFO: Pod "pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73" satisfied condition "Succeeded or Failed" Oct 23 01:28:38.348: INFO: Trying to get logs from node node1 pod pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73 container secret-volume-test: STEP: delete the pod Oct 23 01:28:38.360: INFO: Waiting for pod pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73 to disappear Oct 23 01:28:38.363: INFO: Pod pod-secrets-dca4ab12-2043-42a0-8dbe-5a57f3492c73 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:38.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1750" for this suite. STEP: Destroying namespace "secret-namespace-8388" for this suite. 
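The secrets test just torn down boils down to a pod that mounts a Secret as a volume and cats one key out of it; the second namespace exists only to hold a same-named Secret that must not leak into the mount. A minimal client-go sketch of that pod shape (names and image are placeholders, not the suite's actual spec):

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var secretVolumePod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				// Only the Secret in the pod's own namespace is mounted, which is
				// why a same-named Secret in another namespace must not interfere.
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test-example"},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox", // placeholder image
			Command: []string{"cat", "/etc/secret-volume/data-1"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret-volume",
				ReadOnly:  true,
				MountPath: "/etc/secret-volume",
			}},
		}},
	},
}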
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:33.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-2981/secret-test-d8a75d68-decf-432f-b901-b19be56da3bb STEP: Creating a pod to test consume secrets Oct 23 01:28:33.760: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce" in namespace "secrets-2981" to be "Succeeded or Failed" Oct 23 01:28:33.762: INFO: Pod "pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260744ms Oct 23 01:28:35.766: INFO: Pod "pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005932808s Oct 23 01:28:37.771: INFO: Pod "pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011731924s Oct 23 01:28:39.775: INFO: Pod "pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015414622s STEP: Saw pod success Oct 23 01:28:39.775: INFO: Pod "pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce" satisfied condition "Succeeded or Failed" Oct 23 01:28:39.777: INFO: Trying to get logs from node node2 pod pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce container env-test: STEP: delete the pod Oct 23 01:28:39.793: INFO: Waiting for pod pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce to disappear Oct 23 01:28:39.795: INFO: Pod pod-configmaps-0c615fe5-ab22-41ac-b0b4-2a38734138ce no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:39.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2981" for this suite. 
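The environment-variable variant that follows injects the Secret through the container's env rather than a volume, then dumps `env` so the value can be checked in the logs. A minimal sketch under the same assumptions (placeholder names and image):

// imports as in the earlier sketch: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
var secretEnvPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "env-test",
			Image:   "busybox", // placeholder image
			Command: []string{"sh", "-c", "env"},
			Env: []corev1.EnvVar{{
				Name: "SECRET_DATA",
				ValueFrom: &corev1.EnvVarSource{
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
						Key:                  "data-1",
					},
				},
			}},
		}},
	},
}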
• [SLOW TEST:6.083 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":53,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:39.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:39.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5905" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:39.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 23 01:28:39.943: INFO: Waiting up to 5m0s for pod "pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898" in namespace "emptydir-898" to be "Succeeded or Failed" Oct 23 01:28:39.945: INFO: Pod "pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58601ms Oct 23 01:28:41.949: INFO: Pod "pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006192595s Oct 23 01:28:43.953: INFO: Pod "pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010209647s STEP: Saw pod success Oct 23 01:28:43.953: INFO: Pod "pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898" satisfied condition "Succeeded or Failed" Oct 23 01:28:43.955: INFO: Trying to get logs from node node1 pod pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898 container test-container: STEP: delete the pod Oct 23 01:28:43.967: INFO: Waiting for pod pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898 to disappear Oct 23 01:28:43.968: INFO: Pod pod-246dcb5c-c734-4ea3-91c4-9d2d34c55898 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:43.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-898" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":68,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:29.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Oct 23 01:28:29.879: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:31.884: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:33.883: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:35.885: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:37.882: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:39.882: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Oct 23 01:28:39.898: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:41.901: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:43.902: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 23 01:28:43.903: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:43.904: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:43.987: INFO: Exec stderr: "" Oct 23 01:28:43.988: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:43.988: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.077: INFO: Exec stderr: "" Oct 23 
01:28:44.077: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.077: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.158: INFO: Exec stderr: "" Oct 23 01:28:44.158: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.158: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.245: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 23 01:28:44.245: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.245: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.326: INFO: Exec stderr: "" Oct 23 01:28:44.326: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.327: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.405: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 23 01:28:44.405: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.405: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.483: INFO: Exec stderr: "" Oct 23 01:28:44.483: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.483: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.584: INFO: Exec stderr: "" Oct 23 01:28:44.584: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.584: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.673: INFO: Exec stderr: "" Oct 23 01:28:44.673: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4156 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:28:44.673: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:44.751: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:44.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4156" for this suite. 
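What "kubelet-managed" means in the exec checks above: the kubelet rewrites /etc/hosts for ordinary pod containers, but leaves it alone for hostNetwork pods and for any container that mounts its own file at /etc/hosts. A hedged sketch of the opt-out case (placeholder names; the real test's volume layout may differ):

// imports as in the earlier sketches
var hostPathFile = corev1.HostPathFile

var etcHostsPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-example"},
	Spec: corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "host-etc-hosts",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts", Type: &hostPathFile},
			},
		}},
		Containers: []corev1.Container{
			// This container's /etc/hosts is kubelet-managed (like busybox-1/-2 above).
			{Name: "managed", Image: "busybox", Command: []string{"sleep", "3600"}},
			{
				// Mounting something at /etc/hosts opts this container out of
				// kubelet management (like busybox-3 above).
				Name:         "unmanaged",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
			},
		},
	},
}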
• [SLOW TEST:14.914 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:19.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi W1023 01:28:19.649655 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 01:28:19.649: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 01:28:19.651: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 23 01:28:19.655: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:28:27.666: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:45.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8223" for this suite. 
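The two CRDs exercised by the crd-publish-openapi case above share a group and version and differ only in kind, so both schemas must appear in the published OpenAPI document. A sketch of how such a pair can be declared with apiextensions/v1 (group and kinds here are invented, not the test's fixtures):

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func crdForKind(kind, plural string) *apiextensionsv1.CustomResourceDefinition {
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + ".example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// Each kind carries its own schema; the test asserts both
					// show up in the OpenAPI documentation.
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}

// Same group and version, different kinds:
var fooCRD = crdForKind("Foo", "foos")
var barCRD = crdForKind("Bar", "bars")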
• [SLOW TEST:25.741 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:31.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:28:31.788: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:28:33.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:28:35.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549311, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} 
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:28:38.813: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:28:38.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:46.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9952" for this suite. STEP: Destroying namespace "webhook-9952-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.726 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:47.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1023 01:28:47.107400 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 01:28:47.114: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 23 01:28:47.116: INFO: starting watch STEP: patching STEP: updating Oct 23 01:28:47.126: INFO: waiting for watch events with expected annotations Oct 23 01:28:47.126: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: 
deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:47.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7457" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":3,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:47.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:47.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8156" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":4,"skipped":116,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:47.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 01:28:47.959: INFO: starting watch STEP: patching STEP: updating Oct 23 01:28:47.966: INFO: waiting for watch events with expected annotations Oct 23 01:28:47.966: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:48.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-8629" for this suite. 
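The CSR steps above (create/get/list/watch/patch/update, then the /approval and /status subresources) all go through the certificates.k8s.io/v1 API. A minimal client-go sketch of just the create step, assuming a PEM-encoded PKCS#10 request is already in hand:

import (
	"context"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createExampleCSR covers only creation; the test goes on to patch and update
// the same object and its /approval and /status subresources.
func createExampleCSR(ctx context.Context, cs kubernetes.Interface, pemRequest []byte) error {
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "example-csr"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    pemRequest, // PEM-encoded PKCS#10 certificate request
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	_, err := cs.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
	return err
}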
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":5,"skipped":123,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:43.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-af85db25-65a2-475e-b202-953e6b1f024e STEP: Creating a pod to test consume configMaps Oct 23 01:28:44.024: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c" in namespace "projected-7613" to be "Succeeded or Failed" Oct 23 01:28:44.026: INFO: Pod "pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.911381ms Oct 23 01:28:46.029: INFO: Pod "pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005138768s Oct 23 01:28:48.032: INFO: Pod "pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008126123s STEP: Saw pod success Oct 23 01:28:48.032: INFO: Pod "pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c" satisfied condition "Succeeded or Failed" Oct 23 01:28:48.034: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c container agnhost-container: STEP: delete the pod Oct 23 01:28:48.166: INFO: Waiting for pod pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c to disappear Oct 23 01:28:48.168: INFO: Pod pod-projected-configmaps-4aa4eadd-35fd-425b-98d1-77c0faec641c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:48.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7613" for this suite. 
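"Mappings" in the projected-configMap test name refers to KeyToPath items that remap a ConfigMap key onto a different file path inside the projected volume. Roughly, with placeholder names and image:

// imports as in the earlier sketches: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
var projectedConfigMapPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-example"},
							// The mapping: key "data-1" appears in the volume as
							// path/to/data-2 instead of under its own name.
							Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "agnhost-container",
			Image:        "busybox", // placeholder image
			Command:      []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
			VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
		}},
	},
}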
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:29.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-6821 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6821 to expose endpoints map[] Oct 23 01:28:29.846: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Oct 23 01:28:30.852: INFO: successfully validated that service multi-endpoint-test in namespace services-6821 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6821 Oct 23 01:28:30.866: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:32.870: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:34.870: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:36.871: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:38.869: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6821 to expose endpoints map[pod1:[100]] Oct 23 01:28:38.878: INFO: successfully validated that service multi-endpoint-test in namespace services-6821 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-6821 Oct 23 01:28:38.891: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:40.896: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:42.895: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:44.895: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6821 to expose endpoints map[pod1:[100] pod2:[101]] Oct 23 01:28:47.923: INFO: successfully validated that service multi-endpoint-test in namespace services-6821 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-6821 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6821 to expose endpoints map[pod2:[101]] Oct 23 01:28:50.940: INFO: successfully validated that service multi-endpoint-test in namespace services-6821 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-6821 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6821 to expose endpoints map[] Oct 23 01:28:50.952: INFO: successfully validated that service multi-endpoint-test in namespace
services-6821 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:50.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6821" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:21.152 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:23.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7348" for this suite. • [SLOW TEST:28.069 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":2,"skipped":36,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:51.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Oct 23 01:28:51.930: INFO: Waiting up to 5m0s for pod "var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b" in namespace "var-expansion-4343" to be "Succeeded or Failed" Oct 23 01:28:51.932: INFO: Pod "var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.868736ms Oct 23 01:28:53.935: INFO: Pod "var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005320112s Oct 23 01:28:55.940: INFO: Pod "var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009868961s STEP: Saw pod success Oct 23 01:28:55.940: INFO: Pod "var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b" satisfied condition "Succeeded or Failed" Oct 23 01:28:55.942: INFO: Trying to get logs from node node2 pod var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b container dapi-container: STEP: delete the pod Oct 23 01:28:55.953: INFO: Waiting for pod var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b to disappear Oct 23 01:28:55.955: INFO: Pod var-expansion-e8f87e8a-a2bc-42f2-8fa9-67babb6ad61b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:55.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4343" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:50.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4956.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4956.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4956.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4956.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:28:57.031: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.034: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.036: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.039: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.047: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.050: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.053: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.058: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4956.svc.cluster.local from pod dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2: the server could not find the requested resource (get pods dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2) Oct 23 01:28:57.071: INFO: Lookups using dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4956.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4956.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4956.svc.cluster.local jessie_udp@dns-test-service-2.dns-4956.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4956.svc.cluster.local] Oct 23 01:29:02.127: INFO: DNS probes using dns-4956/dns-test-7945cd4c-ff4b-4ce8-8306-82ddf4e250c2 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:02.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4956" for this suite. • [SLOW TEST:11.167 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:48.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:28:48.262: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932" in namespace "security-context-test-6537" to be "Succeeded or Failed" Oct 23 01:28:48.264: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 1.779048ms Oct 23 01:28:50.268: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006458471s Oct 23 01:28:52.272: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0102555s Oct 23 01:28:54.277: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014951413s Oct 23 01:28:56.282: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019783771s Oct 23 01:28:58.286: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024065666s Oct 23 01:29:00.290: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028243154s Oct 23 01:29:02.294: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.031972533s Oct 23 01:29:02.294: INFO: Pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932" satisfied condition "Succeeded or Failed" Oct 23 01:29:02.304: INFO: Got logs for pod "busybox-privileged-false-170d70cf-2cc3-4acd-b6a9-568b0f01b932": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:02.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6537" for this suite. • [SLOW TEST:14.085 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:30.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-5017 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 01:28:30.029: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 01:28:30.059: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:32.063: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:34.062: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:36.064: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:38.063: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:40.063: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:42.064: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:44.062: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:46.063: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:48.062: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:50.062: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:52.063: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 01:28:52.068: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 01:29:02.101: INFO: Setting MaxTries for pod polling 
to 34 for networking test based on endpoint count 2 Oct 23 01:29:02.101: INFO: Going to poll 10.244.3.172 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 23 01:29:02.104: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.172 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5017 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:29:02.104: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:03.291: INFO: Found all 1 expected endpoints: [netserver-0] Oct 23 01:29:03.291: INFO: Going to poll 10.244.4.246 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 23 01:29:03.294: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.246 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5017 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:29:03.295: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:04.418: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5017" for this suite. • [SLOW TEST:34.419 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":65,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:38.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-5443 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 01:28:38.488: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 01:28:38.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:40.519: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:28:42.521: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:44.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:46.521: INFO: The 
status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:48.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:50.521: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:52.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:54.521: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:56.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:28:58.521: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 01:28:58.526: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 23 01:29:00.530: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 01:29:04.568: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 01:29:04.568: INFO: Going to poll 10.244.3.175 on port 8080 at least 0 times, with a maximum of 34 tries before failing Oct 23 01:29:04.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.175:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5443 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:29:04.571: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:05.034: INFO: Found all 1 expected endpoints: [netserver-0] Oct 23 01:29:05.034: INFO: Going to poll 10.244.4.251 on port 8080 at least 0 times, with a maximum of 34 tries before failing Oct 23 01:29:05.037: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.251:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5443 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:29:05.037: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:05.139: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:05.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5443" for this suite. 
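The two node-pod probes above are plain shell against agnhost's netserver: UDP by piping "hostName" through nc to port 8081, HTTP by curling /hostName on port 8080, in both cases treating any non-blank reply as success. A minimal sketch of re-running them by hand, assuming kubectl access to this cluster and reusing the pod names, namespaces and IPs from this run:

# UDP probe (mirrors the ExecWithOptions command in pod-network-test-5017)
kubectl exec -n pod-network-test-5017 host-test-container-pod -c agnhost-container -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.3.172 8081 | grep -v '^\s*$'"

# HTTP probe (mirrors the command in pod-network-test-5443)
kubectl exec -n pod-network-test-5443 host-test-container-pod -c agnhost-container -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.175:8080/hostName | grep -v '^\s*$'"

Each command succeeds when it prints the target netserver's hostname, which is how the poller above decides an endpoint has been found.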
• [SLOW TEST:26.685 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:02.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-021f8058-477b-40ea-b7e3-b5aca79103cf STEP: Creating a pod to test consume secrets Oct 23 01:29:02.399: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93" in namespace "projected-1231" to be "Succeeded or Failed" Oct 23 01:29:02.402: INFO: Pod "pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.908098ms Oct 23 01:29:04.405: INFO: Pod "pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005994797s Oct 23 01:29:06.409: INFO: Pod "pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010536545s STEP: Saw pod success Oct 23 01:29:06.409: INFO: Pod "pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93" satisfied condition "Succeeded or Failed" Oct 23 01:29:06.412: INFO: Trying to get logs from node node1 pod pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93 container projected-secret-volume-test: STEP: delete the pod Oct 23 01:29:06.425: INFO: Waiting for pod pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93 to disappear Oct 23 01:29:06.427: INFO: Pod pod-projected-secrets-a940ef2f-b64b-4265-a712-1c9d66d82e93 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:06.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1231" for this suite. 
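The projected-secret spec above mounts secret keys through a projected volume and asserts on the resulting file mode. A sketch of an equivalent pod, with illustrative object names (not taken from this run) and busybox standing in for the e2e test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-example
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.34
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-example
EOF

With defaultMode: 0400 the projected file comes out read-only for the owner, which is the kind of mode assertion the defaultMode test makes from the container's output.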
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:06.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Oct 23 01:29:06.590: INFO: Major version: 1 STEP: Confirm minor version Oct 23 01:29:06.590: INFO: cleanMinorVersion: 21 Oct 23 01:29:06.590: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:06.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7542" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":8,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:56.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6101 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6101;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6101 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6101;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6101.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6101.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6101.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6101.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6101.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6101.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-6101.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6101.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6101.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6101.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.7.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.7.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.7.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.7.16_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6101 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6101;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6101 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6101;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6101.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6101.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6101.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6101.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6101.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6101.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6101.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6101.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6101.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6101.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.7.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.7.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.7.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.7.16_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:29:02.060: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.065: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.072: INFO: Unable to read wheezy_udp@dns-test-service.dns-6101 from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.074: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6101 from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.079: INFO: Unable to read wheezy_udp@dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.082: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.089: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.124: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.128: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.134: INFO: Unable to read jessie_udp@dns-test-service.dns-6101 from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.136: INFO: Unable to read jessie_tcp@dns-test-service.dns-6101 from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.142: INFO: Unable to read jessie_udp@dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.149: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.151: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.153: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6101.svc from pod dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3: the server could not find the requested resource (get pods dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3) Oct 23 01:29:02.167: INFO: Lookups using dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6101 wheezy_tcp@dns-test-service.dns-6101 wheezy_udp@dns-test-service.dns-6101.svc wheezy_tcp@dns-test-service.dns-6101.svc wheezy_udp@_http._tcp.dns-test-service.dns-6101.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6101.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6101 jessie_tcp@dns-test-service.dns-6101 jessie_udp@dns-test-service.dns-6101.svc jessie_tcp@dns-test-service.dns-6101.svc jessie_udp@_http._tcp.dns-test-service.dns-6101.svc jessie_tcp@_http._tcp.dns-test-service.dns-6101.svc] Oct 23 01:29:07.250: INFO: DNS probes using dns-6101/dns-test-c87d914c-e014-4bd9-a9e8-b64724cb02b3 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:07.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6101" for this suite. 
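In the partial-qualified-name probes above, the work is done by dig's +search flag, which walks the search list from the pod's /etc/resolv.conf so that dns-test-service and dns-test-service.dns-6101 resolve without being fully qualified. The doubled $$ in the logged commands is escaping for Kubernetes' $(VAR) expansion in container commands; typed by hand, each probe reduces to a single $. A sketch, with <probe-pod> standing in for any pod in the test namespace that has dig, as the wheezy/jessie probe pods above do:

kubectl exec -n dns-6101 <probe-pod> -- /bin/sh -c '
  for name in dns-test-service dns-test-service.dns-6101 dns-test-service.dns-6101.svc; do
    dig +notcp +noall +answer +search "$name" A   # UDP lookup
    dig +tcp +noall +answer +search "$name" A     # TCP lookup
  done'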
• [SLOW TEST:11.279 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":67,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:45.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-h4wn STEP: Creating a pod to test atomic-volume-subpath Oct 23 01:28:45.423: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h4wn" in namespace "subpath-9823" to be "Succeeded or Failed" Oct 23 01:28:45.428: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.920768ms Oct 23 01:28:47.431: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00826992s Oct 23 01:28:49.434: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011771512s Oct 23 01:28:51.438: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015756197s Oct 23 01:28:53.443: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02002552s Oct 23 01:28:55.448: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02489579s Oct 23 01:28:57.452: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Running", Reason="", readiness=true. Elapsed: 12.02948637s Oct 23 01:28:59.456: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Running", Reason="", readiness=true. Elapsed: 14.033855598s Oct 23 01:29:01.461: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Running", Reason="", readiness=true. Elapsed: 16.038674842s Oct 23 01:29:03.464: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Running", Reason="", readiness=true. Elapsed: 18.041589842s Oct 23 01:29:05.468: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Running", Reason="", readiness=true. Elapsed: 20.045081387s Oct 23 01:29:07.472: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Running", Reason="", readiness=true. Elapsed: 22.049288701s Oct 23 01:29:09.475: INFO: Pod "pod-subpath-test-configmap-h4wn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052279427s STEP: Saw pod success Oct 23 01:29:09.475: INFO: Pod "pod-subpath-test-configmap-h4wn" satisfied condition "Succeeded or Failed" Oct 23 01:29:09.478: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-h4wn container test-container-subpath-configmap-h4wn: STEP: delete the pod Oct 23 01:29:09.491: INFO: Waiting for pod pod-subpath-test-configmap-h4wn to disappear Oct 23 01:29:09.493: INFO: Pod pod-subpath-test-configmap-h4wn no longer exists STEP: Deleting pod pod-subpath-test-configmap-h4wn Oct 23 01:29:09.493: INFO: Deleting pod "pod-subpath-test-configmap-h4wn" in namespace "subpath-9823" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:09.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9823" for this suite. • [SLOW TEST:24.122 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":95,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:05.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:29:05.959: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:29:07.966: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549345, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549345, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549345, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549345, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:29:10.976: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API Oct 23 01:29:10.990: INFO: Waiting for webhook configuration to be ready... STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:11.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9781" for this suite. STEP: Destroying namespace "webhook-9781-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.027 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":7,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:02.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-55426a09-ea93-43bf-8845-741020d70191 STEP: Creating secret with name s-test-opt-upd-8dda3935-ed78-4085-bc23-906a3049141a STEP: Creating the pod Oct 23 01:29:02.252: INFO: The status of Pod pod-secrets-63daf02f-543a-4f86-bbe7-87fdc0a0aa36 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:04.257: INFO: The status of Pod pod-secrets-63daf02f-543a-4f86-bbe7-87fdc0a0aa36 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:06.256: INFO: The status of Pod pod-secrets-63daf02f-543a-4f86-bbe7-87fdc0a0aa36 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:08.255: INFO: The status of Pod pod-secrets-63daf02f-543a-4f86-bbe7-87fdc0a0aa36 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-55426a09-ea93-43bf-8845-741020d70191 STEP: Updating secret 
s-test-opt-upd-8dda3935-ed78-4085-bc23-906a3049141a STEP: Creating secret with name s-test-opt-create-03641159-1535-4f46-9a8a-a8b6990b654a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:12.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9756" for this suite. • [SLOW TEST:10.197 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:12.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:12.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3424" for this suite. 
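The [sig-storage] Secrets spec above leans on optional secret volumes: a pod may reference a secret that does not exist yet, the pod still starts, and the kubelet projects the keys into the running container once the secret is created (and removes them again on a later sync if it is deleted). A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-example
spec:
  containers:
  - name: creates-volume-test
    image: busybox:1.34
    command: ["sh", "-c", "while true; do ls /etc/secret-volumes/create 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: create
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: create
    secret:
      secretName: s-test-opt-create-example
      optional: true
EOF

# Creating the secret afterwards makes data-1 appear inside the running pod,
# which is the update the spec waits to observe:
kubectl create secret generic s-test-opt-create-example --from-literal=data-1=value-1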
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":53,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:09.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:29:09.638: INFO: Creating simple deployment test-new-deployment Oct 23 01:29:09.647: INFO: deployment "test-new-deployment" doesn't have the required revision set Oct 23 01:29:11.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:29:13.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549349, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 01:29:15.682: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-6191 ca2f8c63-af99-4a43-9b5e-ca300ddaf246 87668 3 
2021-10-23 01:29:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-23 01:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005ce7818 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 01:29:14 +0000 UTC,LastTransitionTime:2021-10-23 01:29:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-23 01:29:14 +0000 UTC,LastTransitionTime:2021-10-23 01:29:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 23 01:29:15.685: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-6191 6fbdcdf1-a102-4ccc-80ba-496c4bf21f09 87671 3 2021-10-23 01:29:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb]
map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment ca2f8c63-af99-4a43-9b5e-ca300ddaf246 0xc005d5e067 0xc005d5e068}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca2f8c63-af99-4a43-9b5e-ca300ddaf246\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005d5e108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:29:15.688: INFO: Pod "test-new-deployment-847dcfb7fb-kvl4n" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-kvl4n test-new-deployment-847dcfb7fb- deployment-6191 e60f543f-0f3d-4a74-b84e-038eae5e81b5 87672 0 2021-10-23 01:29:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 6fbdcdf1-a102-4ccc-80ba-496c4bf21f09 0xc005d5e70f 0xc005d5e740}] [] [{kube-controller-manager Update v1 2021-10-23 01:29:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fbdcdf1-a102-4ccc-80ba-496c4bf21f09\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8lsmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lsmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:29:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:29:15.689: INFO: Pod "test-new-deployment-847dcfb7fb-pgpjf" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-pgpjf test-new-deployment-847dcfb7fb- deployment-6191 d1068761-e4fe-49e8-a5c6-445f8cee6cab 87639 0 2021-10-23 01:29:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "fa:79:40:e3:fe:f0", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "fa:79:40:e3:fe:f0", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 6fbdcdf1-a102-4ccc-80ba-496c4bf21f09 0xc005d5e93f 0xc005d5e960}] [] [{kube-controller-manager Update v1 2021-10-23 01:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fbdcdf1-a102-4ccc-80ba-496c4bf21f09\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:29:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:29:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-255f2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-255f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:29:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:29:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:29:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:29:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.10,StartTime:2021-10-23 01:29:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:29:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://726d76aa06bb94c0d68ac8eb297b4c49b9b4f07f1c8b3527f5e31d4c8e79839f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:15.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6191" for this suite. 
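For reference, the scale subresource this spec exercises through the API client maps onto everyday kubectl. A minimal sketch, assuming a deployment named web (an illustrative name, not taken from the test):

# kubectl scale writes spec.replicas through the deployments/scale subresource,
# which is the same path the e2e test drives via the REST client.
kubectl scale deployment web --replicas=3
# The deployment controller reconciles the underlying ReplicaSet to the new count.
kubectl get deployment web -o jsonpath='{.status.replicas}'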
• [SLOW TEST:6.080 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":63,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:12.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:29:12.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8" in namespace "projected-4149" to be "Succeeded or Failed" Oct 23 01:29:12.534: INFO: Pod "downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427986ms Oct 23 01:29:14.537: INFO: Pod "downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005227172s Oct 23 01:29:16.541: INFO: Pod "downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009254034s STEP: Saw pod success Oct 23 01:29:16.541: INFO: Pod "downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8" satisfied condition "Succeeded or Failed" Oct 23 01:29:16.544: INFO: Trying to get logs from node node2 pod downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8 container client-container: STEP: delete the pod Oct 23 01:29:16.557: INFO: Waiting for pod downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8 to disappear Oct 23 01:29:16.559: INFO: Pod downwardapi-volume-abf487de-5ce7-423a-bf5d-0bec50f007a8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:16.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4149" for this suite. 
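The downward API volume pattern verified above can be reproduced with a plain manifest; a minimal sketch using the non-projected downwardAPI volume form (the spec uses the projected form; names here are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downwardapi-demo   # prints "1", the container's cpu limit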
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":57,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:06.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-3294 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3294 to expose endpoints map[] Oct 23 01:29:06.702: INFO: successfully validated that service endpoint-test2 in namespace services-3294 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3294 Oct 23 01:29:06.717: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:08.720: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:10.721: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3294 to expose endpoints map[pod1:[80]] Oct 23 01:29:10.732: INFO: successfully validated that service endpoint-test2 in namespace services-3294 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-3294 Oct 23 01:29:10.748: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:12.752: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:14.751: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:16.751: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3294 to expose endpoints map[pod1:[80] pod2:[80]] Oct 23 01:29:16.762: INFO: successfully validated that service endpoint-test2 in namespace services-3294 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-3294 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3294 to expose endpoints map[pod2:[80]] Oct 23 01:29:16.775: INFO: successfully validated that service endpoint-test2 in namespace services-3294 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-3294 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3294 to expose endpoints map[] Oct 23 01:29:16.786: INFO: successfully validated that service endpoint-test2 in namespace services-3294 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:16.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3294" for this suite. 
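What the spec above validates, endpoints tracking only Ready pods behind a selector, can be watched by hand; a minimal sketch with illustrative names:

# Service with selector app=endpoint-demo; no matching pods yet, so no addresses.
kubectl create service clusterip endpoint-demo --tcp=80:80
kubectl get endpoints endpoint-demo            # empty address set
# A Ready pod matching the selector is added to the endpoints...
kubectl run pod1 --image=nginx --labels=app=endpoint-demo
kubectl wait pod/pod1 --for=condition=Ready --timeout=3m
kubectl get endpoints endpoint-demo            # now lists pod1's IP on port 80
# ...and removed again when the pod goes away.
kubectl delete pod pod1
kubectl get endpoints endpoint-demo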
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:10.131 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":9,"skipped":213,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:19.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1023 01:28:19.645268 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 01:28:19.645: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 01:28:19.647: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:19.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2963" for this suite. 
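The behaviour asserted above, a failing readiness probe leaving the container running but never Ready and never restarted, follows from probe semantics: only liveness failures restart containers. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["false"]   # always fails
      periodSeconds: 5
EOF
# READY stays 0/1 while RESTARTS stays 0.
kubectl get pod readiness-fail-demo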
• [SLOW TEST:60.052 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:19.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:19.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5906" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:19.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Oct 23 01:29:20.352: INFO: created pod pod-service-account-defaultsa Oct 23 01:29:20.352: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 23 01:29:20.362: INFO: created pod pod-service-account-mountsa Oct 23 01:29:20.362: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 23 01:29:20.371: INFO: created pod pod-service-account-nomountsa Oct 23 01:29:20.371: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 23 01:29:20.381: INFO: created pod pod-service-account-defaultsa-mountspec Oct 23 01:29:20.381: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 23 01:29:20.390: INFO: created pod pod-service-account-mountsa-mountspec Oct 23 01:29:20.390: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 23 01:29:20.401: INFO: created pod pod-service-account-nomountsa-mountspec 
Oct 23 01:29:20.401: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 23 01:29:20.410: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 23 01:29:20.410: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 23 01:29:20.419: INFO: created pod pod-service-account-mountsa-nomountspec Oct 23 01:29:20.419: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 23 01:29:20.429: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 23 01:29:20.429: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:20.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3245" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:19.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc W1023 01:28:19.612935 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 01:28:19.613: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 01:28:19.616: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1023 01:28:20.668484 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:29:22.684: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:22.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-815" for this suite. 
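The orphaning behaviour checked above is the deleteOptions.PropagationPolicy=Orphan path; from the CLI the same thing is a cascade mode. A minimal sketch, assuming a deployment named web (illustrative):

# Delete the deployment but leave its ReplicaSet (and pods) behind; the garbage
# collector strips the ownerReference instead of deleting the dependents.
kubectl delete deployment web --cascade=orphan
kubectl get rs -l app=web    # the ReplicaSet survives, now ownerless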
• [SLOW TEST:63.110 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:15.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 23 01:29:15.742: INFO: Waiting up to 5m0s for pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba" in namespace "emptydir-8893" to be "Succeeded or Failed" Oct 23 01:29:15.746: INFO: Pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694999ms Oct 23 01:29:17.750: INFO: Pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007832848s Oct 23 01:29:19.756: INFO: Pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013800879s Oct 23 01:29:21.759: INFO: Pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01752156s Oct 23 01:29:23.763: INFO: Pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021015946s STEP: Saw pod success Oct 23 01:29:23.763: INFO: Pod "pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba" satisfied condition "Succeeded or Failed" Oct 23 01:29:23.765: INFO: Trying to get logs from node node2 pod pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba container test-container: STEP: delete the pod Oct 23 01:29:23.789: INFO: Waiting for pod pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba to disappear Oct 23 01:29:23.790: INFO: Pod pod-fe6d9fc3-0967-4229-9fc2-f57dcb2ea3ba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:23.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8893" for this suite. 
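The tmpfs mode check above boils down to medium: Memory plus the default world-writable mode on the emptyDir mount; a minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/scratch; ls -ld /mnt/scratch"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory     # backs the volume with tmpfs
EOF
kubectl logs emptydir-tmpfs-demo   # expect a tmpfs mount and drwxrwxrwx on the dir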
• [SLOW TEST:8.093 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":65,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:23.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Oct 23 01:29:23.857: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1562 e9719500-f568-4399-90bd-07dde13ee171 88001 0 2021-10-23 01:29:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 01:29:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:29:23.857: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1562 e9719500-f568-4399-90bd-07dde13ee171 88002 0 2021-10-23 01:29:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 01:29:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 23 01:29:23.870: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1562 e9719500-f568-4399-90bd-07dde13ee171 88003 0 2021-10-23 01:29:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 01:29:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:29:23.870: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1562 e9719500-f568-4399-90bd-07dde13ee171 88004 0 2021-10-23 01:29:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 01:29:23 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:23.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1562" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":5,"skipped":75,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:20.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 01:29:20.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6038 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 23 01:29:20.719: INFO: stderr: "" Oct 23 01:29:20.719: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 23 01:29:20.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6038 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Oct 23 01:29:21.071: INFO: stderr: "" Oct 23 01:29:21.071: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 01:29:21.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6038 delete pods e2e-test-httpd-pod' Oct 23 01:29:24.501: INFO: stderr: "" Oct 23 01:29:24.501: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:24.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6038" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:16.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:29:16.627: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:24.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-367" for this suite. • [SLOW TEST:8.137 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:44.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 23 01:28:44.858: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 23 01:29:01.977: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:10.538: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:29.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5847" for this suite. 
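The OpenAPI publishing these CRD specs verify is what makes kubectl explain work for custom resources; a minimal sketch (the resource name foos is illustrative, any installed CRD will do):

# Structural schemas from CRDs are merged into the aggregated OpenAPI document...
kubectl get --raw /openapi/v2 > openapi.json
# ...which is the same data kubectl explain renders per field:
kubectl explain foos.spec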
• [SLOW TEST:44.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":4,"skipped":59,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:23.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Oct 23 01:29:23.986: INFO: observed Pod pod-test in namespace pods-7854 in phase Pending with labels: map[test-pod-static:true] & conditions [] Oct 23 01:29:23.987: INFO: observed Pod pod-test in namespace pods-7854 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC }] Oct 23 01:29:26.067: INFO: observed Pod pod-test in namespace pods-7854 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC }] Oct 23 01:29:26.817: INFO: observed Pod pod-test in namespace pods-7854 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC }] Oct 23 01:29:31.106: INFO: Found Pod pod-test in namespace pods-7854 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:23 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Oct 23 01:29:31.117: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Oct 23 01:29:31.136: INFO: observed event 
type ADDED Oct 23 01:29:31.136: INFO: observed event type MODIFIED Oct 23 01:29:31.136: INFO: observed event type MODIFIED Oct 23 01:29:31.136: INFO: observed event type MODIFIED Oct 23 01:29:31.136: INFO: observed event type MODIFIED Oct 23 01:29:31.136: INFO: observed event type MODIFIED Oct 23 01:29:31.136: INFO: observed event type MODIFIED Oct 23 01:29:31.136: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:31.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7854" for this suite. • [SLOW TEST:7.201 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":6,"skipped":102,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:04.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8919 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 01:29:04.483: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 01:29:04.514: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:06.518: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:08.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:10.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:12.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:14.517: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:16.521: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:18.518: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:20.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:22.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:24.517: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:26.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:29:28.517: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 01:29:28.522: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 01:29:36.543: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 01:29:36.543: INFO: Breadth first check of 10.244.3.185 on host 
10.10.190.207... Oct 23 01:29:36.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.18:9080/dial?request=hostname&protocol=http&host=10.244.3.185&port=8080&tries=1'] Namespace:pod-network-test-8919 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:29:36.546: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:36.659: INFO: Waiting for responses: map[] Oct 23 01:29:36.659: INFO: reached 10.244.3.185 after 0/1 tries Oct 23 01:29:36.659: INFO: Breadth first check of 10.244.4.7 on host 10.10.190.208... Oct 23 01:29:36.662: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.18:9080/dial?request=hostname&protocol=http&host=10.244.4.7&port=8080&tries=1'] Namespace:pod-network-test-8919 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:29:36.662: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:29:36.745: INFO: Waiting for responses: map[] Oct 23 01:29:36.745: INFO: reached 10.244.4.7 after 0/1 tries Oct 23 01:29:36.745: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:36.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8919" for this suite. • [SLOW TEST:32.298 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":78,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:31.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-6ef64590-3fe1-4d80-9796-051ba65db46f STEP: Creating a pod to test consume configMaps Oct 23 01:29:31.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111" in namespace "projected-3453" to be "Succeeded or Failed" Oct 23 01:29:31.196: INFO: Pod "pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090389ms Oct 23 01:29:33.199: INFO: Pod "pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007011973s Oct 23 01:29:35.202: INFO: Pod "pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010487091s Oct 23 01:29:37.205: INFO: Pod "pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013606549s STEP: Saw pod success Oct 23 01:29:37.206: INFO: Pod "pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111" satisfied condition "Succeeded or Failed" Oct 23 01:29:37.208: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111 container agnhost-container: STEP: delete the pod Oct 23 01:29:37.223: INFO: Waiting for pod pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111 to disappear Oct 23 01:29:37.225: INFO: Pod pod-projected-configmaps-89945cd2-b3b3-4317-bbdf-d6212cf0c111 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:37.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3453" for this suite. • [SLOW TEST:6.082 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":104,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:22.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 01:29:22.732: INFO: The status of Pod pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:24.736: INFO: The status of Pod pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:26.737: INFO: The status of Pod pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:28.735: INFO: The status of Pod pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471 is Running (Ready = true) STEP: verifying the pod is 
in kubernetes STEP: updating the pod Oct 23 01:29:29.249: INFO: Successfully updated pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471" Oct 23 01:29:29.249: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471" in namespace "pods-8975" to be "terminated due to deadline exceeded" Oct 23 01:29:29.254: INFO: Pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471": Phase="Running", Reason="", readiness=true. Elapsed: 4.277229ms Oct 23 01:29:31.259: INFO: Pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471": Phase="Running", Reason="", readiness=true. Elapsed: 2.009531826s Oct 23 01:29:33.261: INFO: Pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471": Phase="Running", Reason="", readiness=true. Elapsed: 4.012056783s Oct 23 01:29:35.264: INFO: Pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471": Phase="Running", Reason="", readiness=true. Elapsed: 6.01443128s Oct 23 01:29:37.267: INFO: Pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 8.018146238s Oct 23 01:29:37.267: INFO: Pod "pod-update-activedeadlineseconds-62efcad2-7901-4644-96c3-beddf3965471" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:37.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8975" for this suite. • [SLOW TEST:14.580 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:29.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Oct 23 01:29:29.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1942 create -f -' Oct 23 01:29:30.088: INFO: stderr: "" Oct 23 01:29:30.088: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Oct 23 01:29:31.092: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:31.092: INFO: Found 0 / 1 Oct 23 01:29:32.093: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:32.093: INFO: Found 0 / 1 Oct 23 01:29:33.092: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:33.092: INFO: Found 0 / 1 Oct 23 01:29:34.093: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:34.093: INFO: Found 0 / 1 Oct 23 01:29:35.094: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:35.094: INFO: Found 0 / 1 Oct 23 01:29:36.094: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:36.094: INFO: Found 0 / 1 Oct 23 01:29:37.092: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:37.092: INFO: Found 1 / 1 Oct 23 01:29:37.092: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Oct 23 01:29:37.095: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:37.095: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 23 01:29:37.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1942 patch pod agnhost-primary-fz6pc -p {"metadata":{"annotations":{"x":"y"}}}' Oct 23 01:29:37.270: INFO: stderr: "" Oct 23 01:29:37.270: INFO: stdout: "pod/agnhost-primary-fz6pc patched\n" STEP: checking annotations Oct 23 01:29:37.274: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:29:37.274: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:37.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1942" for this suite. 
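The patch the spec applies above generalizes to any label selector; a minimal sketch mirroring the log's command (selector and annotation from the test, the loop added for illustration):

for p in $(kubectl get pods -l app=agnhost -o name); do
  kubectl patch "$p" -p '{"metadata":{"annotations":{"x":"y"}}}'
done
# Confirm the annotation landed on every matched pod:
kubectl get pods -l app=agnhost -o jsonpath='{.items[*].metadata.annotations.x}'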
• [SLOW TEST:7.606 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":5,"skipped":60,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:16.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:29:17.436: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:29:19.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:29:21.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Oct 23 01:29:23.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:29:25.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549357, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:29:28.455: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:40.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-500" for this suite. STEP: Destroying namespace "webhook-500-markers" for this suite. 
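The knobs this spec turns are timeoutSeconds and failurePolicy on the webhook registration: with Ignore a timed-out admission call is waved through, with Fail it rejects the request, and an empty timeout defaults to 10s in v1. A sketch of the registration shape only (name, rules, service path, and CA are illustrative or elided, so this fragment is not directly applyable):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo
webhooks:
- name: demo.example.com
  timeoutSeconds: 1          # shorter than the backend's 5s delay
  failurePolicy: Ignore      # Fail would turn the timeout into a rejection
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-500
      name: e2e-test-webhook
      path: /always-allow-delay-5s
    caBundle: ""             # elided; must carry the serving CA in practice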
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.756 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":10,"skipped":223,"failed":0} [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:40.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:40.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-129" for this suite. 
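The rejection above happens at API validation time: sysctl names under pod.spec.securityContext.sysctls must match the expected name format before scheduling is even attempted. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # well-formed name
      value: "0"
    - name: foo-                     # malformed: fails validation
      value: "bar"
    - name: bar..                    # malformed: fails validation
      value: "42"
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
# Expect the create to be rejected with a field validation error on the sysctl names.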
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":11,"skipped":223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:37.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:29:37.325: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 01:29:37.342: INFO: The status of Pod pod-logs-websocket-123e947f-a093-40a0-bb4c-721f6a7d97d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:39.345: INFO: The status of Pod pod-logs-websocket-123e947f-a093-40a0-bb4c-721f6a7d97d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:41.346: INFO: The status of Pod pod-logs-websocket-123e947f-a093-40a0-bb4c-721f6a7d97d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:43.345: INFO: The status of Pod pod-logs-websocket-123e947f-a093-40a0-bb4c-721f6a7d97d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:45.347: INFO: The status of Pod pod-logs-websocket-123e947f-a093-40a0-bb4c-721f6a7d97d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:47.348: INFO: The status of Pod pod-logs-websocket-123e947f-a093-40a0-bb4c-721f6a7d97d0 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:47.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9341" for this suite. 
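The websocket spec above dials the pod's log endpoint (/api/v1/namespaces/{ns}/pods/{pod}/log) over a websocket connection rather than plain HTTP. That raw websocket dial is not reproduced here; the sketch below shows the standard client-go path to the same endpoint, a streamed GetLogs request, which is what most callers use:

    import (
        "context"
        "io"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // readPodLogs streams the same container-log endpoint the websocket test
    // exercises and returns the accumulated body.
    func readPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod string) (string, error) {
        rc, err := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{}).Stream(ctx)
        if err != nil {
            return "", err
        }
        defer rc.Close()
        b, err := io.ReadAll(rc)
        return string(b), err
    }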
• [SLOW TEST:10.090 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":66,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:37.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-d56016f4-bc53-44e6-a858-6b653bf4d7db STEP: Creating a pod to test consume configMaps Oct 23 01:29:37.345: INFO: Waiting up to 5m0s for pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3" in namespace "configmap-4332" to be "Succeeded or Failed" Oct 23 01:29:37.348: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811474ms Oct 23 01:29:39.352: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006850948s Oct 23 01:29:41.357: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011873981s Oct 23 01:29:43.361: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015654645s Oct 23 01:29:45.365: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019814077s Oct 23 01:29:47.370: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024760551s Oct 23 01:29:49.374: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.029461041s STEP: Saw pod success Oct 23 01:29:49.374: INFO: Pod "pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3" satisfied condition "Succeeded or Failed" Oct 23 01:29:49.377: INFO: Trying to get logs from node node2 pod pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3 container agnhost-container: STEP: delete the pod Oct 23 01:29:49.593: INFO: Waiting for pod pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3 to disappear Oct 23 01:29:49.595: INFO: Pod pod-configmaps-f23f7937-758d-4195-9031-aef59f1999e3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:49.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4332" for this suite. 
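The ConfigMap spec above is the usual volume-consumption pattern with the pod forced off root, so the mounted file must be readable by an unprivileged UID. A sketch under assumed names (the ConfigMap key, mount path, UID, and image are illustrative; the suite's actual agnhost invocation differs):

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapPod mounts cmName as a volume and runs as UID 1000; the pod
    // succeeds only if the projected file is readable as a non-root user.
    func configMapPod(ns, cmName string) *corev1.Pod {
        uid := int64(1000)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-nonroot", Namespace: ns},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "cfg",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "agnhost-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/configmap-volume"}},
                }},
            },
        }
    }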
• [SLOW TEST:12.295 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:36.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Oct 23 01:29:36.803: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:38.808: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:40.810: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:42.808: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 01:29:42.823: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:44.827: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:46.828: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 23 01:29:46.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 01:29:46.844: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 01:29:48.845: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 01:29:48.848: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 01:29:50.845: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 01:29:50.848: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 01:29:52.845: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 01:29:52.848: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 01:29:54.845: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 01:29:54.847: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:29:54.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4222" for this suite. 
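The lifecycle spec above runs two pods: the pod-handle-http-request handler serving HTTP, and a hooked pod whose postStart hook GETs the handler; the kubelet blocks the container's transition to Running until the hook returns, which is what "check poststart hook" verifies. A sketch of the hooked pod, with an illustrative handler endpoint and port; note the handler field was named corev1.Handler in the client-go contemporary with this v1.21 suite and is corev1.LifecycleHandler from v1.23 on:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // podWithPostStartHook issues an HTTP GET against the handler pod right
    // after its container starts; startup is held until the GET succeeds.
    func podWithPostStartHook(ns, handlerIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook", Namespace: ns},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "hooked",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                    Lifecycle: &corev1.Lifecycle{
                        PostStart: &corev1.LifecycleHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=poststart", // illustrative handler endpoint
                                Host: handlerIP,
                                Port: intstr.FromInt(8080),  // illustrative handler port
                            },
                        },
                    },
                }},
            },
        }
    }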
• [SLOW TEST:18.091 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":81,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:54.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:29:54.887: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 23 01:30:02.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 --namespace=crd-publish-openapi-939 create -f -' Oct 23 01:30:03.395: INFO: stderr: "" Oct 23 01:30:03.395: INFO: stdout: "e2e-test-crd-publish-openapi-4013-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 23 01:30:03.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 --namespace=crd-publish-openapi-939 delete e2e-test-crd-publish-openapi-4013-crds test-cr' Oct 23 01:30:03.550: INFO: stderr: "" Oct 23 01:30:03.550: INFO: stdout: "e2e-test-crd-publish-openapi-4013-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 23 01:30:03.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 --namespace=crd-publish-openapi-939 apply -f -' Oct 23 01:30:03.871: INFO: stderr: "" Oct 23 01:30:03.871: INFO: stdout: "e2e-test-crd-publish-openapi-4013-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 23 01:30:03.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 --namespace=crd-publish-openapi-939 delete e2e-test-crd-publish-openapi-4013-crds test-cr' Oct 23 01:30:04.029: INFO: stderr: "" Oct 23 01:30:04.029: INFO: stdout: "e2e-test-crd-publish-openapi-4013-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 23 01:30:04.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 explain e2e-test-crd-publish-openapi-4013-crds' Oct 23 01:30:04.341: INFO: stderr: "" Oct 23 01:30:04.341: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4013-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" 
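"Preserving unknown fields at the schema root" means the CRD publishes an object schema with x-kubernetes-preserve-unknown-fields: true and no properties, which is why client-side validation above accepts arbitrary properties and kubectl explain prints an empty DESCRIPTION. A sketch of such a CRD; the group and names are illustrative, and creation goes through the apiextensions clientset rather than the core one:

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // unknownFieldsCRD publishes a root schema that keeps whatever fields a
    // custom resource carries, instead of pruning unknown ones.
    func unknownFieldsCRD() *apiextensionsv1.CustomResourceDefinition {
        preserve := true
        return &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextensionsv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                            Type:                   "object",
                            XPreserveUnknownFields: &preserve, // keep fields the schema does not declare
                        },
                    },
                }},
            },
        }
    }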
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:07.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-939" for this suite. • [SLOW TEST:13.012 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":6,"skipped":82,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:07.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 01:30:07.932: INFO: Waiting up to 5m0s for pod "downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83" in namespace "downward-api-7496" to be "Succeeded or Failed" Oct 23 01:30:07.934: INFO: Pod "downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069308ms Oct 23 01:30:09.938: INFO: Pod "downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005875435s Oct 23 01:30:11.942: INFO: Pod "downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009750452s STEP: Saw pod success Oct 23 01:30:11.942: INFO: Pod "downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83" satisfied condition "Succeeded or Failed" Oct 23 01:30:11.944: INFO: Trying to get logs from node node2 pod downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83 container dapi-container: STEP: delete the pod Oct 23 01:30:11.959: INFO: Waiting for pod downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83 to disappear Oct 23 01:30:11.961: INFO: Pod downward-api-f238c118-7b1c-4e31-bb9e-5ee18a1e3b83 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:11.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7496" for this suite. 
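The Downward API spec above needs only a fieldRef env var: status.hostIP resolves, at pod start, to the IP of the node the pod landed on. A minimal sketch (the image, command, and env var name are illustrative):

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIPod exposes the scheduling node's IP to the container as
    // the HOST_IP environment variable via the Downward API.
    func downwardAPIPod(ns string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip", Namespace: ns},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env | grep HOST_IP"},
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
    }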
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:49.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zldp4 in namespace proxy-4650 I1023 01:29:49.648365 29 runners.go:190] Created replication controller with name: proxy-service-zldp4, namespace: proxy-4650, replica count: 1 I1023 01:29:50.699566 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:51.700229 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:52.700817 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:53.703075 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:54.703282 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1023 01:29:55.703636 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1023 01:29:56.704050 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1023 01:29:57.704344 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1023 01:29:58.704987 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1023 01:29:59.705884 29 runners.go:190] proxy-service-zldp4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:29:59.708: INFO: setup took 10.069740081s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 23 01:29:59.711: INFO: (0) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.990139ms) Oct 23 01:29:59.711: INFO: (0) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.18673ms) Oct 23 01:29:59.711: INFO: (0) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 3.186781ms) Oct 23 01:29:59.711: INFO: (0) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... 
(200; 3.101251ms) Oct 23 01:29:59.714: INFO: (0) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 5.561437ms) Oct 23 01:29:59.714: INFO: (0) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 5.547453ms) Oct 23 01:29:59.714: INFO: (0) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 5.601036ms) Oct 23 01:29:59.718: INFO: (0) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 9.335859ms) Oct 23 01:29:59.718: INFO: (0) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 9.424923ms) Oct 23 01:29:59.718: INFO: (0) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 9.473483ms) Oct 23 01:29:59.721: INFO: (0) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 12.472053ms) Oct 23 01:29:59.721: INFO: (0) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 13.161076ms) Oct 23 01:29:59.721: INFO: (0) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 13.181029ms) Oct 23 01:29:59.722: INFO: (0) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... (200; 3.090126ms) Oct 23 01:29:59.725: INFO: (1) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.368532ms) Oct 23 01:29:59.725: INFO: (1) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 3.370575ms) Oct 23 01:29:59.725: INFO: (1) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test<... (200; 2.571589ms) Oct 23 01:29:59.729: INFO: (2) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.684038ms) Oct 23 01:29:59.729: INFO: (2) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.526134ms) Oct 23 01:29:59.729: INFO: (2) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.56727ms) Oct 23 01:29:59.729: INFO: (2) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.605405ms) Oct 23 01:29:59.730: INFO: (2) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.067912ms) Oct 23 01:29:59.730: INFO: (2) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 2.960766ms) Oct 23 01:29:59.730: INFO: (2) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.014323ms) Oct 23 01:29:59.730: INFO: (2) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... (200; 2.292439ms) Oct 23 01:29:59.734: INFO: (3) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.769665ms) Oct 23 01:29:59.734: INFO: (3) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... 
(200; 3.127743ms) Oct 23 01:29:59.734: INFO: (3) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.139997ms) Oct 23 01:29:59.734: INFO: (3) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.038515ms) Oct 23 01:29:59.734: INFO: (3) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.18334ms) Oct 23 01:29:59.734: INFO: (3) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 3.313296ms) Oct 23 01:29:59.735: INFO: (3) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.695681ms) Oct 23 01:29:59.735: INFO: (3) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.528729ms) Oct 23 01:29:59.735: INFO: (3) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 4.192771ms) Oct 23 01:29:59.735: INFO: (3) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.962108ms) Oct 23 01:29:59.735: INFO: (3) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 4.419496ms) Oct 23 01:29:59.735: INFO: (3) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 4.393166ms) Oct 23 01:29:59.738: INFO: (4) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 1.990382ms) Oct 23 01:29:59.738: INFO: (4) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test<... (200; 2.54647ms) Oct 23 01:29:59.738: INFO: (4) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.623864ms) Oct 23 01:29:59.738: INFO: (4) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.585559ms) Oct 23 01:29:59.738: INFO: (4) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 2.718822ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.14813ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.341224ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.406646ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.570441ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.649769ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.730088ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.850525ms) Oct 23 01:29:59.739: INFO: (4) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.822518ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 2.078008ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... 
(200; 2.226325ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.298267ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.300988ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.317316ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.532003ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.666487ms) Oct 23 01:29:59.742: INFO: (5) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test (200; 2.331081ms) Oct 23 01:29:59.746: INFO: (6) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.340926ms) Oct 23 01:29:59.746: INFO: (6) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.488108ms) Oct 23 01:29:59.746: INFO: (6) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.376331ms) Oct 23 01:29:59.746: INFO: (6) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 2.453509ms) Oct 23 01:29:59.746: INFO: (6) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.696486ms) Oct 23 01:29:59.747: INFO: (6) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.964573ms) Oct 23 01:29:59.747: INFO: (6) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 2.892929ms) Oct 23 01:29:59.747: INFO: (6) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test<... (200; 3.131593ms) Oct 23 01:29:59.747: INFO: (6) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.361671ms) Oct 23 01:29:59.748: INFO: (6) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.74543ms) Oct 23 01:29:59.748: INFO: (6) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.873948ms) Oct 23 01:29:59.748: INFO: (6) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 4.217877ms) Oct 23 01:29:59.748: INFO: (6) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 4.163984ms) Oct 23 01:29:59.750: INFO: (7) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 1.890948ms) Oct 23 01:29:59.750: INFO: (7) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 1.980138ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.810997ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 2.649386ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.693688ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 2.922725ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.886408ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... 
(200; 3.067321ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.046263ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.1715ms) Oct 23 01:29:59.751: INFO: (7) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... (200; 1.98922ms) Oct 23 01:29:59.754: INFO: (8) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.141424ms) Oct 23 01:29:59.754: INFO: (8) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.407982ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.630937ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test (200; 2.875525ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.898264ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.175632ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 3.088087ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.60245ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.603731ms) Oct 23 01:29:59.755: INFO: (8) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.572295ms) Oct 23 01:29:59.756: INFO: (8) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.725706ms) Oct 23 01:29:59.756: INFO: (8) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.993093ms) Oct 23 01:29:59.756: INFO: (8) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 4.054784ms) Oct 23 01:29:59.758: INFO: (9) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 1.924211ms) Oct 23 01:29:59.758: INFO: (9) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... (200; 2.150903ms) Oct 23 01:29:59.758: INFO: (9) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.272564ms) Oct 23 01:29:59.758: INFO: (9) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.077491ms) Oct 23 01:29:59.759: INFO: (9) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.439298ms) Oct 23 01:29:59.759: INFO: (9) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.398469ms) Oct 23 01:29:59.759: INFO: (9) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.708754ms) Oct 23 01:29:59.759: INFO: (9) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 3.251702ms) Oct 23 01:29:59.759: INFO: (9) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... 
(200; 3.156483ms) Oct 23 01:29:59.759: INFO: (9) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.276398ms) Oct 23 01:29:59.760: INFO: (9) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.591921ms) Oct 23 01:29:59.760: INFO: (9) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.636924ms) Oct 23 01:29:59.760: INFO: (9) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.80478ms) Oct 23 01:29:59.760: INFO: (9) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 4.02309ms) Oct 23 01:29:59.760: INFO: (9) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 4.061429ms) Oct 23 01:29:59.763: INFO: (10) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test<... (200; 2.810367ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.910085ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 3.319148ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.321259ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 3.493046ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.434043ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 3.55563ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 3.591748ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.920502ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.949475ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.819954ms) Oct 23 01:29:59.764: INFO: (10) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.80833ms) Oct 23 01:29:59.765: INFO: (10) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 4.181424ms) Oct 23 01:29:59.765: INFO: (10) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 4.159888ms) Oct 23 01:29:59.767: INFO: (11) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.173067ms) Oct 23 01:29:59.767: INFO: (11) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.261001ms) Oct 23 01:29:59.767: INFO: (11) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.214137ms) Oct 23 01:29:59.767: INFO: (11) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 2.286644ms) Oct 23 01:29:59.767: INFO: (11) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.583105ms) Oct 23 01:29:59.768: INFO: (11) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... 
(200; 3.001952ms) Oct 23 01:29:59.768: INFO: (11) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.972218ms) Oct 23 01:29:59.768: INFO: (11) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 3.173222ms) Oct 23 01:29:59.768: INFO: (11) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.525842ms) Oct 23 01:29:59.768: INFO: (11) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.550203ms) Oct 23 01:29:59.768: INFO: (11) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.441147ms) Oct 23 01:29:59.769: INFO: (11) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.654853ms) Oct 23 01:29:59.769: INFO: (11) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.632447ms) Oct 23 01:29:59.771: INFO: (12) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 2.233439ms) Oct 23 01:29:59.771: INFO: (12) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 2.134463ms) Oct 23 01:29:59.771: INFO: (12) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.307774ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test (200; 3.006832ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 3.295523ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 3.290764ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.347108ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 3.162719ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 3.286707ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.46714ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.535635ms) Oct 23 01:29:59.772: INFO: (12) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.59419ms) Oct 23 01:29:59.773: INFO: (12) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.560642ms) Oct 23 01:29:59.773: INFO: (12) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.954187ms) Oct 23 01:29:59.775: INFO: (13) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.155378ms) Oct 23 01:29:59.775: INFO: (13) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.105631ms) Oct 23 01:29:59.775: INFO: (13) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.413985ms) Oct 23 01:29:59.775: INFO: (13) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... 
(200; 3.175886ms) Oct 23 01:29:59.776: INFO: (13) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.220613ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.576068ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 3.609159ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.779482ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 4.194606ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 4.127896ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 4.078201ms) Oct 23 01:29:59.777: INFO: (13) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 4.421864ms) Oct 23 01:29:59.780: INFO: (14) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 2.361154ms) Oct 23 01:29:59.780: INFO: (14) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.319226ms) Oct 23 01:29:59.780: INFO: (14) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.424837ms) Oct 23 01:29:59.780: INFO: (14) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.530244ms) Oct 23 01:29:59.784: INFO: (14) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 6.893639ms) Oct 23 01:29:59.784: INFO: (14) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 6.869971ms) Oct 23 01:29:59.784: INFO: (14) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 6.769591ms) Oct 23 01:29:59.785: INFO: (14) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 6.818769ms) Oct 23 01:29:59.785: INFO: (14) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 6.773069ms) Oct 23 01:29:59.785: INFO: (14) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... (200; 6.88676ms) Oct 23 01:29:59.785: INFO: (14) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 7.113074ms) Oct 23 01:29:59.785: INFO: (14) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 7.049983ms) Oct 23 01:29:59.785: INFO: (14) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 7.034178ms) Oct 23 01:29:59.787: INFO: (15) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.012349ms) Oct 23 01:29:59.787: INFO: (15) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.116923ms) Oct 23 01:29:59.787: INFO: (15) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.134838ms) Oct 23 01:29:59.787: INFO: (15) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.527571ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... 
(200; 2.790974ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.767962ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.930612ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.093102ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 3.153559ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.272112ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.238526ms) Oct 23 01:29:59.788: INFO: (15) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.574005ms) Oct 23 01:29:59.789: INFO: (15) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.718484ms) Oct 23 01:29:59.789: INFO: (15) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.792193ms) Oct 23 01:29:59.789: INFO: (15) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 4.123295ms) Oct 23 01:29:59.791: INFO: (16) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 1.812956ms) Oct 23 01:29:59.791: INFO: (16) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 1.995812ms) Oct 23 01:29:59.792: INFO: (16) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 2.366367ms) Oct 23 01:29:59.792: INFO: (16) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.554419ms) Oct 23 01:29:59.792: INFO: (16) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.749071ms) Oct 23 01:29:59.792: INFO: (16) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.024913ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: ... 
(200; 3.387727ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 3.468733ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.408258ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.352102ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.60008ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 3.474984ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.906813ms) Oct 23 01:29:59.793: INFO: (16) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.982798ms) Oct 23 01:29:59.795: INFO: (17) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.091802ms) Oct 23 01:29:59.795: INFO: (17) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 1.951529ms) Oct 23 01:29:59.795: INFO: (17) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.157649ms) Oct 23 01:29:59.796: INFO: (17) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test<... (200; 2.450348ms) Oct 23 01:29:59.796: INFO: (17) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.634796ms) Oct 23 01:29:59.796: INFO: (17) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.663231ms) Oct 23 01:29:59.796: INFO: (17) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 3.078834ms) Oct 23 01:29:59.796: INFO: (17) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 3.093963ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... (200; 3.040435ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 3.03462ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.243813ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.284151ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname2/proxy/: bar (200; 3.598058ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 4.035927ms) Oct 23 01:29:59.797: INFO: (17) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.991479ms) Oct 23 01:29:59.799: INFO: (18) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.028611ms) Oct 23 01:29:59.800: INFO: (18) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 1.969689ms) Oct 23 01:29:59.800: INFO: (18) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... 
(200; 2.108891ms) Oct 23 01:29:59.800: INFO: (18) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:460/proxy/: tls baz (200; 2.50608ms) Oct 23 01:29:59.800: INFO: (18) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.419835ms) Oct 23 01:29:59.800: INFO: (18) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.557011ms) Oct 23 01:29:59.801: INFO: (18) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname2/proxy/: bar (200; 2.886997ms) Oct 23 01:29:59.801: INFO: (18) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.964242ms) Oct 23 01:29:59.801: INFO: (18) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 3.08447ms) Oct 23 01:29:59.801: INFO: (18) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: test<... (200; 3.701384ms) Oct 23 01:29:59.801: INFO: (18) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname2/proxy/: tls qux (200; 3.82041ms) Oct 23 01:29:59.802: INFO: (18) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 4.008179ms) Oct 23 01:29:59.802: INFO: (18) /api/v1/namespaces/proxy-4650/services/http:proxy-service-zldp4:portname1/proxy/: foo (200; 4.020024ms) Oct 23 01:29:59.802: INFO: (18) /api/v1/namespaces/proxy-4650/services/proxy-service-zldp4:portname1/proxy/: foo (200; 3.939978ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft/proxy/: test (200; 2.005951ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:1080/proxy/: test<... (200; 2.158626ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.319339ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.296495ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:162/proxy/: bar (200; 2.549351ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/http:proxy-service-zldp4-7ptft:1080/proxy/: ... 
(200; 2.670499ms) Oct 23 01:29:59.804: INFO: (19) /api/v1/namespaces/proxy-4650/pods/proxy-service-zldp4-7ptft:160/proxy/: foo (200; 2.656342ms) Oct 23 01:29:59.805: INFO: (19) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:462/proxy/: tls qux (200; 2.809487ms) Oct 23 01:29:59.805: INFO: (19) /api/v1/namespaces/proxy-4650/services/https:proxy-service-zldp4:tlsportname1/proxy/: tls baz (200; 3.2086ms) Oct 23 01:29:59.805: INFO: (19) /api/v1/namespaces/proxy-4650/pods/https:proxy-service-zldp4-7ptft:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9971 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9971 STEP: creating replication controller externalsvc in namespace services-9971 I1023 01:29:40.757656 35 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9971, replica count: 2 I1023 01:29:43.809252 35 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:46.810704 35 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:49.811690 35 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 23 01:29:49.826: INFO: Creating new exec pod Oct 23 01:29:55.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9971 exec execpodmfpxx -- /bin/sh -x -c nslookup nodeport-service.services-9971.svc.cluster.local' Oct 23 01:29:56.139: INFO: stderr: "+ nslookup nodeport-service.services-9971.svc.cluster.local\n" Oct 23 01:29:56.139: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-9971.svc.cluster.local\tcanonical name = externalsvc.services-9971.svc.cluster.local.\nName:\texternalsvc.services-9971.svc.cluster.local\nAddress: 10.233.56.230\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9971, will wait for the garbage collector to delete the pods Oct 23 01:29:56.198: INFO: Deleting ReplicationController externalsvc took: 4.483854ms Oct 23 01:29:56.299: INFO: Terminating ReplicationController externalsvc pods took: 101.199432ms Oct 23 01:30:13.909: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:13.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9971" for this suite. 
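The service-type change above is a plain update on the existing Service: flip spec.type to ExternalName, point spec.externalName at the target FQDN (here externalsvc's cluster DNS name, which is why the nslookup returns a CNAME), and clear the fields an ExternalName service may not carry. A sketch, assuming the update is done directly with client-go rather than the suite's helpers:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // switchToExternalName converts a NodePort service into an ExternalName
    // service; the allocated cluster IP and node ports must be dropped or the
    // API server rejects the update.
    func switchToExternalName(ctx context.Context, cs kubernetes.Interface, ns, name, target string) error {
        svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = target // e.g. "externalsvc.services-9971.svc.cluster.local"
        svc.Spec.ClusterIP = ""        // ExternalName services carry no cluster IP
        svc.Spec.ClusterIPs = nil
        for i := range svc.Spec.Ports {
            svc.Spec.Ports[i].NodePort = 0 // release the allocated node ports
        }
        _, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
        return err
    }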
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:33.207 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":12,"skipped":265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:13.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Oct 23 01:30:13.930: INFO: Waiting up to 5m0s for pod "var-expansion-789547c0-8222-421a-8934-f45d1b5d8898" in namespace "var-expansion-3044" to be "Succeeded or Failed" Oct 23 01:30:13.935: INFO: Pod "var-expansion-789547c0-8222-421a-8934-f45d1b5d8898": Phase="Pending", Reason="", readiness=false. Elapsed: 4.847897ms Oct 23 01:30:15.940: INFO: Pod "var-expansion-789547c0-8222-421a-8934-f45d1b5d8898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009410814s Oct 23 01:30:17.944: INFO: Pod "var-expansion-789547c0-8222-421a-8934-f45d1b5d8898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013464014s STEP: Saw pod success Oct 23 01:30:17.944: INFO: Pod "var-expansion-789547c0-8222-421a-8934-f45d1b5d8898" satisfied condition "Succeeded or Failed" Oct 23 01:30:17.947: INFO: Trying to get logs from node node2 pod var-expansion-789547c0-8222-421a-8934-f45d1b5d8898 container dapi-container: STEP: delete the pod Oct 23 01:30:17.992: INFO: Waiting for pod var-expansion-789547c0-8222-421a-8934-f45d1b5d8898 to disappear Oct 23 01:30:17.994: INFO: Pod var-expansion-789547c0-8222-421a-8934-f45d1b5d8898 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:17.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3044" for this suite. 
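Env composition above relies on the kubelet expanding $(VAR) references against variables defined earlier in the same env list; references to undefined variables are left verbatim. A minimal sketch (names, values, and image are illustrative):

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // composedEnvPod defines FOOBAR in terms of FOO and BAR; the container
    // sees the expanded value "foo-value;;bar-value".
    func composedEnvPod(ns string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion", Namespace: ns},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo $FOOBAR"},
                    Env: []corev1.EnvVar{
                        {Name: "FOO", Value: "foo-value"},
                        {Name: "BAR", Value: "bar-value"},
                        // $(VAR) references to earlier vars are expanded by the kubelet.
                        {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                    },
                }},
            },
        }
    }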
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:12.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:30:12.145: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:18.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8468" for this suite. • [SLOW TEST:6.050 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":8,"skipped":160,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:14.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9233.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9233.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9233.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9233.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9233.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9233.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:30:20.116: INFO: DNS probes using dns-9233/dns-test-dfb92344-05ef-45a5-bdc7-6ae6fa231d1c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:20.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9233" for this suite. • [SLOW TEST:6.085 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:18.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 23 01:30:18.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 create -f -' Oct 23 01:30:18.441: INFO: stderr: "" Oct 23 01:30:18.441: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 23 01:30:18.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:30:18.612: INFO: stderr: "" Oct 23 01:30:18.612: INFO: stdout: "update-demo-nautilus-6f7m6 update-demo-nautilus-8xcx9 " Oct 23 01:30:18.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:18.782: INFO: stderr: "" Oct 23 01:30:18.782: INFO: stdout: "" Oct 23 01:30:18.782: INFO: update-demo-nautilus-6f7m6 is created but not running Oct 23 01:30:23.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:30:23.946: INFO: stderr: "" Oct 23 01:30:23.946: INFO: stdout: "update-demo-nautilus-6f7m6 update-demo-nautilus-8xcx9 " Oct 23 01:30:23.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:24.083: INFO: stderr: "" Oct 23 01:30:24.083: INFO: stdout: "true" Oct 23 01:30:24.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:30:24.234: INFO: stderr: "" Oct 23 01:30:24.234: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:30:24.234: INFO: validating pod update-demo-nautilus-6f7m6 Oct 23 01:30:24.237: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:30:24.238: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:30:24.238: INFO: update-demo-nautilus-6f7m6 is verified up and running Oct 23 01:30:24.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-8xcx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:24.401: INFO: stderr: "" Oct 23 01:30:24.401: INFO: stdout: "true" Oct 23 01:30:24.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-8xcx9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:30:24.567: INFO: stderr: "" Oct 23 01:30:24.567: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:30:24.567: INFO: validating pod update-demo-nautilus-8xcx9 Oct 23 01:30:24.571: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:30:24.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 23 01:30:24.571: INFO: update-demo-nautilus-8xcx9 is verified up and running STEP: scaling down the replication controller Oct 23 01:30:24.581: INFO: scanned /root for discovery docs: Oct 23 01:30:24.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Oct 23 01:30:24.782: INFO: stderr: "" Oct 23 01:30:24.782: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 23 01:30:24.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:30:24.955: INFO: stderr: "" Oct 23 01:30:24.955: INFO: stdout: "update-demo-nautilus-6f7m6 update-demo-nautilus-8xcx9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 23 01:30:29.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:30:30.129: INFO: stderr: "" Oct 23 01:30:30.129: INFO: stdout: "update-demo-nautilus-6f7m6 " Oct 23 01:30:30.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:30.300: INFO: stderr: "" Oct 23 01:30:30.300: INFO: stdout: "true" Oct 23 01:30:30.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:30:30.466: INFO: stderr: "" Oct 23 01:30:30.466: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:30:30.466: INFO: validating pod update-demo-nautilus-6f7m6 Oct 23 01:30:30.469: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:30:30.470: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:30:30.470: INFO: update-demo-nautilus-6f7m6 is verified up and running STEP: scaling up the replication controller Oct 23 01:30:30.479: INFO: scanned /root for discovery docs: Oct 23 01:30:30.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Oct 23 01:30:30.683: INFO: stderr: "" Oct 23 01:30:30.683: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 23 01:30:30.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:30:30.845: INFO: stderr: "" Oct 23 01:30:30.845: INFO: stdout: "update-demo-nautilus-6f7m6 update-demo-nautilus-6hkrj " Oct 23 01:30:30.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:31.015: INFO: stderr: "" Oct 23 01:30:31.015: INFO: stdout: "true" Oct 23 01:30:31.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:30:31.169: INFO: stderr: "" Oct 23 01:30:31.169: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:30:31.169: INFO: validating pod update-demo-nautilus-6f7m6 Oct 23 01:30:31.172: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:30:31.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:30:31.172: INFO: update-demo-nautilus-6f7m6 is verified up and running Oct 23 01:30:31.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6hkrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:31.340: INFO: stderr: "" Oct 23 01:30:31.340: INFO: stdout: "" Oct 23 01:30:31.340: INFO: update-demo-nautilus-6hkrj is created but not running Oct 23 01:30:36.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:30:36.519: INFO: stderr: "" Oct 23 01:30:36.519: INFO: stdout: "update-demo-nautilus-6f7m6 update-demo-nautilus-6hkrj " Oct 23 01:30:36.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:36.683: INFO: stderr: "" Oct 23 01:30:36.683: INFO: stdout: "true" Oct 23 01:30:36.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6f7m6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:30:36.866: INFO: stderr: "" Oct 23 01:30:36.866: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:30:36.866: INFO: validating pod update-demo-nautilus-6f7m6 Oct 23 01:30:36.869: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:30:36.869: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:30:36.869: INFO: update-demo-nautilus-6f7m6 is verified up and running Oct 23 01:30:36.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6hkrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:30:37.024: INFO: stderr: "" Oct 23 01:30:37.024: INFO: stdout: "true" Oct 23 01:30:37.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods update-demo-nautilus-6hkrj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:30:37.188: INFO: stderr: "" Oct 23 01:30:37.188: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:30:37.188: INFO: validating pod update-demo-nautilus-6hkrj Oct 23 01:30:37.193: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:30:37.193: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:30:37.193: INFO: update-demo-nautilus-6hkrj is verified up and running STEP: using delete to clean up resources Oct 23 01:30:37.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 delete --grace-period=0 --force -f -' Oct 23 01:30:37.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:30:37.327: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 23 01:30:37.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get rc,svc -l name=update-demo --no-headers' Oct 23 01:30:37.528: INFO: stderr: "No resources found in kubectl-4146 namespace.\n" Oct 23 01:30:37.528: INFO: stdout: "" Oct 23 01:30:37.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4146 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 23 01:30:37.699: INFO: stderr: "" Oct 23 01:30:37.699: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:37.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4146" for this suite. 
• [SLOW TEST:19.678 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:37.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:30:37.772: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 23 01:30:37.777: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 23 01:30:42.785: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 23 01:30:42.785: INFO: Creating deployment "test-rolling-update-deployment" Oct 23 01:30:42.789: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 23 01:30:42.793: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 23 01:30:44.801: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 23 01:30:44.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549442, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549442, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549442, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549442, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:30:46.809: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 01:30:46.817: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6349 
f1bf016e-8f7a-4839-9fa4-bacad8e21057 89566 1 2021-10-23 01:30:42 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-23 01:30:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00341d5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 01:30:42 +0000 UTC,LastTransitionTime:2021-10-23 01:30:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-23 01:30:45 +0000 UTC,LastTransitionTime:2021-10-23 01:30:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 23 01:30:46.819: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-6349
7f407268-9de9-422a-9fdc-3d0ecef5e793 89552 1 2021-10-23 01:30:42 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f1bf016e-8f7a-4839-9fa4-bacad8e21057 0xc004518717 0xc004518718}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1bf016e-8f7a-4839-9fa4-bacad8e21057\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045187a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:30:46.819: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 23 01:30:46.819: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6349 345b4bfa-902e-4077-9fc7-1eb8bc9a3bf5 89565 2 2021-10-23 01:30:37 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f1bf016e-8f7a-4839-9fa4-bacad8e21057 0xc004518607 0xc004518608}] [] [{e2e.test Update apps/v1 2021-10-23 01:30:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1bf016e-8f7a-4839-9fa4-bacad8e21057\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0045186a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:30:46.822: INFO: Pod "test-rolling-update-deployment-585b757574-65gsk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-65gsk test-rolling-update-deployment-585b757574- deployment-6349 ee7be2c7-1f1d-42bf-b0d4-03b43a46abdd 89551 0 2021-10-23 01:30:42 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.37" ], "mac": "46:9a:f6:10:05:e8", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.37" ], "mac": "46:9a:f6:10:05:e8", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 7f407268-9de9-422a-9fdc-3d0ecef5e793 0xc004518bbf 0xc004518bd0}] [] [{kube-controller-manager Update v1 2021-10-23 01:30:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f407268-9de9-422a-9fdc-3d0ecef5e793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:30:44 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:30:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w7jrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7jrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:30:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:30:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:30:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:30:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.37,StartTime:2021-10-23 01:30:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:30:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://32bc60c0f66b2f0fe06873776a0e1c80196a832f05e0f69a4d0c2b2564f4a2ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:46.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6349" for this suite. 
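The Deployment spec that just finished verifies that a RollingUpdate deployment adopts the pre-existing test-rolling-update-controller replica set, rolls its pod onto a new revision, and keeps exactly one old replica set around at scale zero. The deployment object dumped above boils down to a manifest like this (trimmed to the fields the test exercises; reconstructed from the struct dump, so treat it as a sketch):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod            # also matches the adopted replica set's pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32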
• [SLOW TEST:9.085 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:46.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7 Oct 23 01:30:46.922: INFO: Pod name my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7: Found 0 pods out of 1 Oct 23 01:30:51.929: INFO: Pod name my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7: Found 1 pods out of 1 Oct 23 01:30:51.929: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7" are running Oct 23 01:30:51.933: INFO: Pod "my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7-jfp6r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:30:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:30:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:30:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:30:46 +0000 UTC Reason: Message:}]) Oct 23 01:30:51.934: INFO: Trying to dial the pod Oct 23 01:30:56.943: INFO: Controller my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7: Got expected result from replica 1 [my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7-jfp6r]: "my-hostname-basic-7cf9e60f-44ac-4ace-9e96-9a1b0c485ce7-jfp6r", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:30:56.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-760" for this suite. 
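The ReplicationController spec that just finished brings up one replica from a public image and dials the pod until the reply equals the pod's own name. A minimal controller of the same shape (the run uses a generated UUID name; the image and serve-hostname behaviour are an assumption inferred from the expected reply):

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic          # the run appends a generated UUID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumption
        args: ["serve-hostname"]                          # replies with the pod name
        ports:
        - containerPort: 9376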
• [SLOW TEST:10.063 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":8,"skipped":79,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:11.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1023 01:29:11.253829 22 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:01.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-8164" for this suite. • [SLOW TEST:110.050 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":8,"skipped":117,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:01.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:08.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4871" for this suite. 
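The ResourceQuota spec that just finished only creates a quota object and waits for the quota controller to populate .status, which is why it passes without creating any workload. The same check by hand (quota name and limits are illustrative):

# quota.yaml -- illustrative
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "5"
    configmaps: "10"

$ kubectl --namespace=resourcequota-4871 apply -f quota.yaml
$ kubectl --namespace=resourcequota-4871 get resourcequota test-quota \
    -o jsonpath='{.status.hard}'   # non-empty once the controller has calculated status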
• [SLOW TEST:7.042 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":9,"skipped":130,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:08.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:08.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8339 version' Oct 23 01:31:08.502: INFO: stderr: "" Oct 23 01:31:08.502: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.5\", GitCommit:\"aea7bbadd2fc0cd689de94a54e5b7b758869d691\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:10:45Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:08.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8339" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":10,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:08.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 23 01:31:08.642: INFO: Waiting up to 5m0s for pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5" in namespace "emptydir-3266" to be "Succeeded or Failed" Oct 23 01:31:08.646: INFO: Pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.672242ms Oct 23 01:31:10.650: INFO: Pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007818042s Oct 23 01:31:12.654: INFO: Pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0120029s Oct 23 01:31:14.658: INFO: Pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015924375s Oct 23 01:31:16.664: INFO: Pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021740127s STEP: Saw pod success Oct 23 01:31:16.664: INFO: Pod "pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5" satisfied condition "Succeeded or Failed" Oct 23 01:31:16.667: INFO: Trying to get logs from node node1 pod pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5 container test-container: STEP: delete the pod Oct 23 01:31:16.682: INFO: Waiting for pod pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5 to disappear Oct 23 01:31:16.684: INFO: Pod pod-64070bbc-2ca7-4382-8e38-8edae4f5eeb5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:16.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3266" for this suite. • [SLOW TEST:8.081 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:37.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8795 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Oct 23 01:29:37.279: INFO: Found 0 stateful pods, waiting for 3 Oct 23 01:29:47.285: INFO: Found 2 stateful pods, waiting for 3 Oct 23 01:29:57.285: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 01:29:57.285: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 01:29:57.285: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Oct 23 01:29:57.309: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 23 01:30:07.339: INFO: Updating stateful set ss2 Oct 23 01:30:07.344: INFO: Waiting for Pod statefulset-8795/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Oct 23 01:30:17.366: INFO: Found 1 stateful pods, waiting for 3 Oct 23 01:30:27.373: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 01:30:27.373: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 01:30:27.373: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 23 01:30:27.395: INFO: Updating stateful set ss2 Oct 23 01:30:27.401: INFO: Waiting for Pod statefulset-8795/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 23 01:30:37.427: INFO: Updating stateful set ss2 Oct 23 01:30:37.433: INFO: Waiting for StatefulSet statefulset-8795/ss2 to complete update Oct 23 01:30:37.433: INFO: Waiting for Pod statefulset-8795/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 01:30:47.440: INFO: Deleting all statefulset in ns statefulset-8795 Oct 23 01:30:47.443: INFO: Scaling statefulset ss2 to 0 Oct 23 01:31:17.459: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 01:31:17.461: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:17.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8795" for this suite. 
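The StatefulSet spec that just finished drives canary and phased rolling updates through the RollingUpdate partition field: only pods with an ordinal greater than or equal to the partition move to the new revision, so a partition above the replica count applies nothing, partition=2 updates only ss2-2 (the canary), and lowering it step by step phases the roll-out. A sketch of the same sequence with kubectl (the container name webserver is an assumption; the image tags come from the template-update step above):

$ kubectl --namespace=statefulset-8795 patch statefulset ss2 --type merge \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
$ kubectl --namespace=statefulset-8795 set image statefulset/ss2 \
    webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1   # no pod moves yet: partition > replicas
$ kubectl --namespace=statefulset-8795 patch statefulset ss2 --type merge \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'   # canary: only ss2-2 updates
$ kubectl --namespace=statefulset-8795 patch statefulset ss2 --type merge \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # phased completion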
• [SLOW TEST:100.233 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":8,"skipped":109,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:17.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:17.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7578" for this suite. 
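The Pods Extended spec that just finished checks that the API server sets status.qosClass to Guaranteed when every container's requests equal its limits for both cpu and memory. A minimal pod that lands in that class (name and quantities are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo        # illustrative
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                       # equal to requests => Guaranteed
        cpu: 100m
        memory: 100Mi

$ kubectl get pod qos-guaranteed-demo -o jsonpath='{.status.qosClass}'   # Guaranteed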
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":9,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:47.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-5749bbfd-805d-4801-9383-4590dcc8940b STEP: Creating secret with name s-test-opt-upd-4152f6ae-a948-469b-8b2c-d7f0051bfd78 STEP: Creating the pod Oct 23 01:29:47.492: INFO: The status of Pod pod-projected-secrets-096b05f9-877f-4c76-ae5d-34ee51f07d61 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:49.496: INFO: The status of Pod pod-projected-secrets-096b05f9-877f-4c76-ae5d-34ee51f07d61 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:51.496: INFO: The status of Pod pod-projected-secrets-096b05f9-877f-4c76-ae5d-34ee51f07d61 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:53.495: INFO: The status of Pod pod-projected-secrets-096b05f9-877f-4c76-ae5d-34ee51f07d61 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:29:55.496: INFO: The status of Pod pod-projected-secrets-096b05f9-877f-4c76-ae5d-34ee51f07d61 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-5749bbfd-805d-4801-9383-4590dcc8940b STEP: Updating secret s-test-opt-upd-4152f6ae-a948-469b-8b2c-d7f0051bfd78 STEP: Creating secret with name s-test-opt-create-e85db5f1-a541-4036-bf96-36bf1088c5d6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:22.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4743" for this suite. 
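The Projected secret spec that just finished mounts secrets into a projected volume with optional: true, then deletes one, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted files. The volume shape under test looks like this (secret names copied from the log; pod, container, and mount names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # illustrative
spec:
  containers:
  - name: creates-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del-5749bbfd-805d-4801-9383-4590dcc8940b
          optional: true             # pod still starts (and keeps running) if this secret is deleted
      - secret:
          name: s-test-opt-upd-4152f6ae-a948-469b-8b2c-d7f0051bfd78
          optional: true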
• [SLOW TEST:95.563 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":75,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:17.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 23 01:31:17.732: INFO: Waiting up to 5m0s for pod "pod-90040076-0a40-46c8-a1fc-6fa75f292788" in namespace "emptydir-4994" to be "Succeeded or Failed" Oct 23 01:31:17.736: INFO: Pod "pod-90040076-0a40-46c8-a1fc-6fa75f292788": Phase="Pending", Reason="", readiness=false. Elapsed: 3.109094ms Oct 23 01:31:19.742: INFO: Pod "pod-90040076-0a40-46c8-a1fc-6fa75f292788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009228117s Oct 23 01:31:21.745: INFO: Pod "pod-90040076-0a40-46c8-a1fc-6fa75f292788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012265612s Oct 23 01:31:23.750: INFO: Pod "pod-90040076-0a40-46c8-a1fc-6fa75f292788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017306466s STEP: Saw pod success Oct 23 01:31:23.750: INFO: Pod "pod-90040076-0a40-46c8-a1fc-6fa75f292788" satisfied condition "Succeeded or Failed" Oct 23 01:31:23.752: INFO: Trying to get logs from node node2 pod pod-90040076-0a40-46c8-a1fc-6fa75f292788 container test-container: STEP: delete the pod Oct 23 01:31:23.800: INFO: Waiting for pod pod-90040076-0a40-46c8-a1fc-6fa75f292788 to disappear Oct 23 01:31:23.802: INFO: Pod pod-90040076-0a40-46c8-a1fc-6fa75f292788 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:23.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4994" for this suite. 
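(Note: in the EmptyDir spec above, "default medium" means the volume is backed by the node's local storage; the alternative is medium: Memory, i.e. a tmpfs. The "(non-root,0666,default)" triple appears to name the test matrix: run as a non-root user, expect file mode 0666, on the default medium. A minimal illustration of the two media — all names illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["pause"]
      volumeMounts:
      - {name: scratch, mountPath: /scratch}
      - {name: scratch-mem, mountPath: /scratch-mem}
    volumes:
    - name: scratch
      emptyDir: {}            # default medium: node-local storage
    - name: scratch-mem
      emptyDir:
        medium: Memory        # tmpfs; usage counts against the pod's memory limit
  EOF
)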
• [SLOW TEST:6.115 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:23.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:23.050: INFO: The status of Pod pod-secrets-6967f47a-1f98-4bef-86d6-0debe4dc1cba is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:31:25.055: INFO: The status of Pod pod-secrets-6967f47a-1f98-4bef-86d6-0debe4dc1cba is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:31:27.055: INFO: The status of Pod pod-secrets-6967f47a-1f98-4bef-86d6-0debe4dc1cba is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:27.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1738" for this suite. 
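(Note: "EmptyDir wrapper volumes" refers to volume types such as secret and configMap, which the kubelet materializes on top of an emptyDir wrapper; the spec above checks that two of them in one pod do not conflict, then cleans up the secret, the configmap, and the pod. A rough equivalent — all names illustrative:

  kubectl create secret generic wrap-secret --from-literal=k=v
  kubectl create configmap wrap-cm --from-literal=k=v
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-demo
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["pause"]
      volumeMounts:
      - {name: sec, mountPath: /etc/sec}
      - {name: cm, mountPath: /etc/cm}
    volumes:
    - name: sec
      secret: {secretName: wrap-secret}
    - name: cm
      configMap: {name: wrap-cm}
  EOF
)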
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":8,"skipped":80,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:27.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Oct 23 01:31:27.153: INFO: Found Service test-service-bzqzf in namespace services-9560 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Oct 23 01:31:27.154: INFO: Service test-service-bzqzf created STEP: Getting /status Oct 23 01:31:27.157: INFO: Service test-service-bzqzf has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Oct 23 01:31:27.162: INFO: observed Service test-service-bzqzf in namespace services-9560 with annotations: map[] & LoadBalancer: {[]} Oct 23 01:31:27.162: INFO: Found Service test-service-bzqzf in namespace services-9560 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Oct 23 01:31:27.162: INFO: Service test-service-bzqzf has service status patched STEP: updating the ServiceStatus Oct 23 01:31:27.168: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Oct 23 01:31:27.169: INFO: Observed Service test-service-bzqzf in namespace services-9560 with annotations: map[] & Conditions: {[]} Oct 23 01:31:27.169: INFO: Observed event: &Service{ObjectMeta:{test-service-bzqzf services-9560 ec38432b-2bed-4154-850e-d1588c25e68b 90151 0 2021-10-23 01:31:27 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-23 01:31:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.40.124,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.40.124],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Oct 23 01:31:27.170: INFO: Found Service test-service-bzqzf in namespace services-9560 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Oct 23 01:31:27.170: INFO: Service test-service-bzqzf has service status updated STEP: patching the service STEP: watching for the Service to be patched Oct 23 01:31:27.176: INFO: observed Service test-service-bzqzf in namespace services-9560 with labels: map[test-service-static:true] Oct 23 01:31:27.176: INFO: observed Service test-service-bzqzf in namespace services-9560 with labels: map[test-service-static:true] Oct 23 01:31:27.176: INFO: observed Service test-service-bzqzf in namespace services-9560 with labels: map[test-service-static:true] Oct 23 01:31:27.176: INFO: Found Service test-service-bzqzf in namespace services-9560 with labels: map[test-service:patched test-service-static:true] Oct 23 01:31:27.176: INFO: Service test-service-bzqzf patched STEP: deleting the service STEP: watching for the Service to be deleted Oct 23 01:31:27.183: INFO: Observed event: ADDED Oct 23 01:31:27.183: INFO: Observed event: MODIFIED Oct 23 01:31:27.183: INFO: Observed event: MODIFIED Oct 23 01:31:27.183: INFO: Observed event: MODIFIED Oct 23 01:31:27.184: INFO: Found Service test-service-bzqzf in namespace services-9560 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Oct 23 01:31:27.184: INFO: Service test-service-bzqzf deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:27.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9560" for this suite. 
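(Note: the Services lifecycle spec above exercises the Service /status subresource directly: it patches status.loadBalancer.ingress with 203.0.113.1 (a TEST-NET documentation address) and writes a custom StatusUpdate condition, then watches for the corresponding MODIFIED events. The test drives this through client-go on this v1.21 cluster; with kubectl v1.24 or later, which added --subresource, roughly the same patch can be issued by hand — service name illustrative:

  kubectl create service clusterip status-demo --tcp=80:80
  kubectl patch service status-demo --subresource=status --type=merge \
    -p '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}'
  kubectl get service status-demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
)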
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":9,"skipped":94,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:23.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-708867a0-f57e-4e76-b54e-0dd544712763 STEP: Creating a pod to test consume configMaps Oct 23 01:31:23.915: INFO: Waiting up to 5m0s for pod "pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394" in namespace "configmap-7521" to be "Succeeded or Failed" Oct 23 01:31:23.916: INFO: Pod "pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394": Phase="Pending", Reason="", readiness=false. Elapsed: 1.839276ms Oct 23 01:31:25.921: INFO: Pod "pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006070051s Oct 23 01:31:27.924: INFO: Pod "pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009438825s STEP: Saw pod success Oct 23 01:31:27.924: INFO: Pod "pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394" satisfied condition "Succeeded or Failed" Oct 23 01:31:27.927: INFO: Trying to get logs from node node1 pod pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394 container configmap-volume-test: STEP: delete the pod Oct 23 01:31:27.940: INFO: Waiting for pod pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394 to disappear Oct 23 01:31:27.942: INFO: Pod pod-configmaps-baee50b6-86e7-4213-93d6-e40bd55a5394 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:27.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7521" for this suite. 
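(Note: the ConfigMap spec above mounts a single configmap into one pod through two separate volumes and verifies both mounts serve the same data. A minimal sketch — names and paths illustrative; /bin/sh is available in the agnhost image, as the exec steps elsewhere in this log show:

  kubectl create configmap multi-cm --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-multi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      command: ["/bin/sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
      volumeMounts:
      - {name: cm-a, mountPath: /etc/cm-a}
      - {name: cm-b, mountPath: /etc/cm-b}
    volumes:
    - name: cm-a
      configMap: {name: multi-cm}
    - name: cm-b
      configMap: {name: multi-cm}   # same configmap, second volume: no conflict
  EOF
)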
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":215,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:27.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-39746e27-a519-4f16-8ca7-661d6ef0a04f [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:27.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9152" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":12,"skipped":224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:16.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating an pod Oct 23 01:31:16.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 23 01:31:16.966: INFO: stderr: "" Oct 23 01:31:16.966: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Oct 23 01:31:16.966: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 23 01:31:16.966: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5282" to be "running and ready, or succeeded" Oct 23 01:31:16.970: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.373132ms Oct 23 01:31:18.975: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008749118s Oct 23 01:31:20.979: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012901845s Oct 23 01:31:22.983: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.016064428s Oct 23 01:31:22.983: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 23 01:31:22.983: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Oct 23 01:31:22.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 logs logs-generator logs-generator' Oct 23 01:31:23.149: INFO: stderr: "" Oct 23 01:31:23.149: INFO: stdout: "I1023 01:31:20.870161 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/t4v 303\nI1023 01:31:21.071116 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/7nx 512\nI1023 01:31:21.270261 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/7xsr 262\nI1023 01:31:21.470471 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/94nr 324\nI1023 01:31:21.670801 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/6sq 548\nI1023 01:31:21.871165 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/8h4 211\nI1023 01:31:22.070374 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/mfh 477\nI1023 01:31:22.270797 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/sfv9 437\nI1023 01:31:22.471144 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/ncs 559\nI1023 01:31:22.670314 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/m4d 593\nI1023 01:31:22.870689 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/6kr 416\nI1023 01:31:23.071060 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/qhl 439\n" STEP: limiting log lines Oct 23 01:31:23.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 logs logs-generator logs-generator --tail=1' Oct 23 01:31:23.316: INFO: stderr: "" Oct 23 01:31:23.316: INFO: stdout: "I1023 01:31:23.270301 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/8xg 217\n" Oct 23 01:31:23.316: INFO: got output "I1023 01:31:23.270301 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/8xg 217\n" STEP: limiting log bytes Oct 23 01:31:23.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 logs logs-generator logs-generator --limit-bytes=1' Oct 23 01:31:23.463: INFO: stderr: "" Oct 23 01:31:23.463: INFO: stdout: "I" Oct 23 01:31:23.463: INFO: got output "I" STEP: exposing timestamps Oct 23 01:31:23.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 logs logs-generator logs-generator --tail=1 --timestamps' Oct 23 01:31:23.630: INFO: stderr: "" Oct 23 01:31:23.630: INFO: stdout: "2021-10-23T01:31:23.478324415Z I1023 01:31:23.470612 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/tnp 583\n" Oct 23 01:31:23.630: INFO: got output "2021-10-23T01:31:23.478324415Z I1023 01:31:23.470612 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/tnp 583\n" STEP: restricting to a time range Oct 23 01:31:26.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 logs logs-generator logs-generator --since=1s' Oct 23 01:31:26.295: INFO: stderr: "" Oct 23 01:31:26.295: INFO: stdout: "I1023 01:31:25.470809 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/4t2 327\nI1023 01:31:25.671187 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/htq7 431\nI1023 01:31:25.870848 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/4cxb 385\nI1023 
01:31:26.070189 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/62xf 434\nI1023 01:31:26.270601 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/bwt 414\n" Oct 23 01:31:26.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 logs logs-generator logs-generator --since=24h' Oct 23 01:31:26.504: INFO: stderr: "" Oct 23 01:31:26.504: INFO: stdout: "I1023 01:31:20.870161 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/t4v 303\nI1023 01:31:21.071116 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/7nx 512\nI1023 01:31:21.270261 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/7xsr 262\nI1023 01:31:21.470471 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/94nr 324\nI1023 01:31:21.670801 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/6sq 548\nI1023 01:31:21.871165 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/8h4 211\nI1023 01:31:22.070374 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/mfh 477\nI1023 01:31:22.270797 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/sfv9 437\nI1023 01:31:22.471144 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/ncs 559\nI1023 01:31:22.670314 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/m4d 593\nI1023 01:31:22.870689 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/6kr 416\nI1023 01:31:23.071060 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/qhl 439\nI1023 01:31:23.270301 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/8xg 217\nI1023 01:31:23.470612 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/tnp 583\nI1023 01:31:23.671027 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/shwd 428\nI1023 01:31:23.870200 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/k45 526\nI1023 01:31:24.070609 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/jg4 525\nI1023 01:31:24.270919 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/77f5 321\nI1023 01:31:24.471201 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/4t8 507\nI1023 01:31:24.670535 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/t29b 385\nI1023 01:31:24.870870 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/mbdm 217\nI1023 01:31:25.071169 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/m9bn 362\nI1023 01:31:25.270381 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/zvk 497\nI1023 01:31:25.470809 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/4t2 327\nI1023 01:31:25.671187 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/htq7 431\nI1023 01:31:25.870848 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/4cxb 385\nI1023 01:31:26.070189 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/62xf 434\nI1023 01:31:26.270601 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/bwt 414\nI1023 01:31:26.470992 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/2qg7 216\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Oct 23 01:31:26.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5282 delete pod logs-generator' Oct 23 01:31:33.882: INFO: stderr: "" Oct 23 01:31:33.882: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] 
Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:33.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5282" for this suite. • [SLOW TEST:17.093 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":12,"skipped":231,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:28.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 01:31:28.103: INFO: Waiting up to 5m0s for pod "downward-api-35770b91-2851-4391-96a0-089bfdd1e878" in namespace "downward-api-9966" to be "Succeeded or Failed" Oct 23 01:31:28.105: INFO: Pod "downward-api-35770b91-2851-4391-96a0-089bfdd1e878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036305ms Oct 23 01:31:30.109: INFO: Pod "downward-api-35770b91-2851-4391-96a0-089bfdd1e878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006350204s Oct 23 01:31:32.113: INFO: Pod "downward-api-35770b91-2851-4391-96a0-089bfdd1e878": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010699216s Oct 23 01:31:34.117: INFO: Pod "downward-api-35770b91-2851-4391-96a0-089bfdd1e878": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014726797s STEP: Saw pod success Oct 23 01:31:34.117: INFO: Pod "downward-api-35770b91-2851-4391-96a0-089bfdd1e878" satisfied condition "Succeeded or Failed" Oct 23 01:31:34.121: INFO: Trying to get logs from node node2 pod downward-api-35770b91-2851-4391-96a0-089bfdd1e878 container dapi-container: STEP: delete the pod Oct 23 01:31:34.133: INFO: Waiting for pod downward-api-35770b91-2851-4391-96a0-089bfdd1e878 to disappear Oct 23 01:31:34.134: INFO: Pod downward-api-35770b91-2851-4391-96a0-089bfdd1e878 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:34.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9966" for this suite. 
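(Note: the Downward API spec above exposes the container's own resource requests and limits to it as environment variables via resourceFieldRef. A minimal sketch — pod name, variable names, and resource values illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      command: ["/bin/sh", "-c", "env | sort"]
      resources:
        requests: {cpu: 250m, memory: 32Mi}
        limits:   {cpu: 500m, memory: 64Mi}
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef: {resource: limits.cpu}      # 500m rounds up to "1" with the default divisor
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef: {resource: requests.memory} # "33554432", i.e. 32Mi in bytes
  EOF
  kubectl logs downward-env-demo | grep -E 'CPU_LIMIT|MEMORY_REQUEST'
)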
• [SLOW TEST:6.072 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":253,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:56.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2772 STEP: creating service affinity-clusterip in namespace services-2772 STEP: creating replication controller affinity-clusterip in namespace services-2772 I1023 01:30:57.003879 29 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2772, replica count: 3 I1023 01:31:00.054809 29 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:31:03.055624 29 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:31:03.061: INFO: Creating new exec pod Oct 23 01:31:16.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2772 exec execpod-affinity87ch6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Oct 23 01:31:16.321: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Oct 23 01:31:16.321: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:31:16.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2772 exec execpod-affinity87ch6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.32.4 80' Oct 23 01:31:16.573: INFO: stderr: "+ nc -v -t -w 2 10.233.32.4 80\n+ echo hostName\nConnection to 10.233.32.4 80 port [tcp/http] succeeded!\n" Oct 23 01:31:16.573: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:31:16.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2772 exec execpod-affinity87ch6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.32.4:80/ ; done' Oct 23 01:31:16.858: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.4:80/\n" Oct 23 01:31:16.858: INFO: stdout: "\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c\naffinity-clusterip-xb28c" Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Received response from host: affinity-clusterip-xb28c Oct 23 01:31:16.858: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2772, will wait for the garbage collector to delete the pods Oct 23 01:31:16.926: INFO: Deleting ReplicationController affinity-clusterip took: 4.78845ms Oct 23 01:31:17.027: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.984421ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:35.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2772" for this suite. 
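(Note: in the session-affinity spec above, all 16 curls through the ClusterIP land on the same backend, affinity-clusterip-xb28c, because the service sets sessionAffinity: ClientIP. The two earlier "400 Bad Request" replies are expected: the nc probe sends a bare "hostName" line, which is not valid HTTP, and only verifies the port is reachable. The same behavior can be reproduced with a reasonably recent kubectl — names illustrative; agnhost serve-hostname answers each request with the pod's name on port 9376:

  kubectl create deployment affinity-demo --replicas=3 \
    --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 -- /agnhost serve-hostname
  kubectl expose deployment affinity-demo --port=80 --target-port=9376 \
    --session-affinity=ClientIP
  # From a pod inside the cluster, repeated requests return one pod name:
  #   for i in $(seq 0 15); do curl -s http://affinity-demo.default/; echo; done
)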
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:38.083 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:35.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:35.139: INFO: Creating deployment "webserver-deployment" Oct 23 01:31:35.143: INFO: Waiting for observed generation 1 Oct 23 01:31:37.148: INFO: Waiting for all required pods to come up Oct 23 01:31:37.151: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 23 01:31:45.158: INFO: Waiting for deployment "webserver-deployment" to complete Oct 23 01:31:45.163: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 23 01:31:45.171: INFO: Updating deployment webserver-deployment Oct 23 01:31:45.171: INFO: Waiting for observed generation 2 Oct 23 01:31:47.178: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 23 01:31:47.181: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 23 01:31:47.183: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 23 01:31:47.191: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 23 01:31:47.192: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 23 01:31:47.194: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 23 01:31:47.200: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 23 01:31:47.200: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 23 01:31:47.209: INFO: Updating deployment webserver-deployment Oct 23 01:31:47.209: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 23 01:31:47.213: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 23 01:31:47.215: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 
01:31:47.220: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1362 bd42eeb1-da84-441e-8257-011c539f9fc8 90710 3 2021-10-23 01:31:35 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000e35c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 01:31:42 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-23 01:31:45 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 23 01:31:47.223: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-1362 
0d187747-32af-4876-a4c2-8152fdc1bf30 90713 3 2021-10-23 01:31:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment bd42eeb1-da84-441e-8257-011c539f9fc8 0xc0045fc067 0xc0045fc068}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd42eeb1-da84-441e-8257-011c539f9fc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045fc1c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:31:47.223: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 23 01:31:47.223: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-1362 70fbe61a-2bdd-472a-b558-bf200550da40 90711 3 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment bd42eeb1-da84-441e-8257-011c539f9fc8 0xc0045fc227 0xc0045fc228}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:31:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd42eeb1-da84-441e-8257-011c539f9fc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045fc298 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:31:47.228: INFO: Pod "webserver-deployment-795d758f88-gxdpd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gxdpd webserver-deployment-795d758f88- deployment-1362 722b2dc3-666e-459f-a0a3-5b63f4b8662f 90717 0 2021-10-23 01:31:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d187747-32af-4876-a4c2-8152fdc1bf30 0xc0040f00af 0xc0040f00c0}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d187747-32af-4876-a4c2-8152fdc1bf30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nghjd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nghjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.228: INFO: Pod "webserver-deployment-795d758f88-jgqqn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jgqqn webserver-deployment-795d758f88- deployment-1362 7f972103-599b-4b04-8ac1-9bc893d7c51b 90707 0 2021-10-23 01:31:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.52" ], "mac": "b6:33:6b:f8:d9:25", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.52" ], "mac": "b6:33:6b:f8:d9:25", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d187747-32af-4876-a4c2-8152fdc1bf30 0xc0040f01ff 0xc0040f0210}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d187747-32af-4876-a4c2-8152fdc1bf30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-10-23 01:31:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c7w86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7w86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 01:31:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.228: INFO: Pod "webserver-deployment-795d758f88-lptd5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lptd5 webserver-deployment-795d758f88- deployment-1362 350757d3-a830-4fac-8c65-51f209dcd5cd 90686 0 2021-10-23 01:31:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d187747-32af-4876-a4c2-8152fdc1bf30 0xc0040f03ff 0xc0040f0410}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d187747-32af-4876-a4c2-8152fdc1bf30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5cxnp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cxnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 01:31:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.229: INFO: Pod "webserver-deployment-795d758f88-mpt5p" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mpt5p webserver-deployment-795d758f88- deployment-1362 142edc5a-c9ce-44e2-98b9-3f46db957975 90703 0 2021-10-23 01:31:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d187747-32af-4876-a4c2-8152fdc1bf30 0xc0040f05df 0xc0040f05f0}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d187747-32af-4876-a4c2-8152fdc1bf30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4d7n4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d7n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 01:31:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.229: INFO: Pod "webserver-deployment-795d758f88-n2qtl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n2qtl webserver-deployment-795d758f88- deployment-1362 186c036a-e179-4cd6-ac5e-fcbd80733f25 90697 0 2021-10-23 01:31:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d187747-32af-4876-a4c2-8152fdc1bf30 0xc0040f07bf 0xc0040f07d0}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d187747-32af-4876-a4c2-8152fdc1bf30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4svnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4svnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 01:31:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.229: INFO: Pod "webserver-deployment-795d758f88-n6kbx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n6kbx webserver-deployment-795d758f88- deployment-1362 e77c0004-79c8-48a9-9f17-14afa7f6eff6 90704 0 2021-10-23 01:31:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d187747-32af-4876-a4c2-8152fdc1bf30 0xc0040f099f 0xc0040f09b0}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d187747-32af-4876-a4c2-8152fdc1bf30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 01:31:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4sjc6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4sjc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 01:31:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.230: INFO: Pod "webserver-deployment-847dcfb7fb-6bs2n" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6bs2n webserver-deployment-847dcfb7fb- deployment-1362 329bf8d7-7f44-41db-bb9e-7846a84f095f 90572 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.50" ], "mac": "46:ef:74:16:96:a5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.50" ], "mac": "46:ef:74:16:96:a5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f0b7f 0xc0040f0b90}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p47q8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p47q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.50,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e25750455ff6aa16b4f5381ee84505dd85890043721bedce770ca127ea19b609,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.230: INFO: Pod "webserver-deployment-847dcfb7fb-9cqc8" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9cqc8 webserver-deployment-847dcfb7fb- deployment-1362 90c5da23-7a06-466d-a556-9e70ab3545eb 90613 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.214" ], "mac": "36:f6:a5:28:b3:5f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.214" ], "mac": "36:f6:a5:28:b3:5f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f0d7f 0xc0040f0d90}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tdwpj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdwpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.214,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://1c7412c7c7c84a0a0696e899c221c6b160ae35c9591e1ec9abe11174e923f649,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.230: INFO: Pod "webserver-deployment-847dcfb7fb-d8rb2" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-d8rb2 webserver-deployment-847dcfb7fb- deployment-1362 5ce5d86a-bb76-4a45-b9b9-065dd14416df 90619 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.217" ], "mac": "9e:c4:f3:0f:58:17", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.217" ], "mac": "9e:c4:f3:0f:58:17", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f0f9f 0xc0040f0fc0}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.217\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j9894,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9894,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.217,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://993a5dd9e90b116492efc4389a192417a3c128afd9fa9e2faa258e937005832f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.231: INFO: Pod "webserver-deployment-847dcfb7fb-h474s" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-h474s webserver-deployment-847dcfb7fb- deployment-1362 8d80b12b-8467-4880-8b45-548165909f61 90585 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.49" ], "mac": "32:e2:bf:e4:5f:f3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.49" ], "mac": "32:e2:bf:e4:5f:f3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f11bf 0xc0040f11d0}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7g7m7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g7m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.49,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b75e523ef8cfbe26188598819e134cd630f8c80e380ae15c2725fb68c6652454,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.231: INFO: Pod "webserver-deployment-847dcfb7fb-mw6l5" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-mw6l5 webserver-deployment-847dcfb7fb- deployment-1362 51a3789f-2af7-49e2-b371-885bcb024eb6 90622 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.216" ], "mac": "be:61:f5:6c:bc:53", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.216" ], "mac": 
"be:61:f5:6c:bc:53", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f141f 0xc0040f1430}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.216\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hzbqd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzbqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.216,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e105048215b303edad4ea94440ef7bd3e16b04391adf9c67f030c870b0c01584,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.231: INFO: Pod "webserver-deployment-847dcfb7fb-q9l2j" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-q9l2j webserver-deployment-847dcfb7fb- deployment-1362 4ef61210-4191-4ce4-bf19-3546940a5da5 90718 0 2021-10-23 01:31:47 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f165f 0xc0040f1670}] [] 
[{kube-controller-manager Update v1 2021-10-23 01:31:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k7v49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k7v49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tol
erations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.232: INFO: Pod "webserver-deployment-847dcfb7fb-qd5xl" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qd5xl webserver-deployment-847dcfb7fb- deployment-1362 3c98e876-93c1-4786-ac33-16f51b5d4aaa 90569 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.48" ], "mac": "a2:ab:09:13:46:4c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.48" ], "mac": "a2:ab:09:13:46:4c", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f17ef 0xc0040f1800}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qtpcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtpcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.48,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://1d78f95e0e233ed17583e1c1dbb5f8ae533bcd1d5adb101b20190d63b3f1ff8a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.232: INFO: Pod "webserver-deployment-847dcfb7fb-r4qfb" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-r4qfb webserver-deployment-847dcfb7fb- deployment-1362 352d2e30-5845-4a83-bf9b-bf4ed8e2b06f 90625 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.215" ], "mac": "9e:8f:82:c0:e9:be", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.215" ], "mac": "9e:8f:82:c0:e9:be", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f1a0f 0xc0040f1a20}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k2lbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2lbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.215,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://68213fb0e38b82483d9ddfc14e22a917efcd9db9dc2cb75e21380d0b18c9b807,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:31:47.232: INFO: Pod "webserver-deployment-847dcfb7fb-r6jbv" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-r6jbv webserver-deployment-847dcfb7fb- deployment-1362 18d693e8-c58a-4cb0-99ae-a91341b6e60b 90610 0 2021-10-23 01:31:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.51" ], "mac": "36:ea:d5:2b:ef:92", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.51" ], "mac": "36:ea:d5:2b:ef:92", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 70fbe61a-2bdd-472a-b558-bf200550da40 0xc0040f1c0f 0xc0040f1c20}] [] [{kube-controller-manager Update v1 2021-10-23 01:31:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70fbe61a-2bdd-472a-b558-bf200550da40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:31:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:31:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.51\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vs9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vs9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.51,StartTime:2021-10-23 01:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:31:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://1599eb7fb78c58d162a715cf0bfaa17af003604318880972c49e1d3f77a61af3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:47.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1362" for this suite. 
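A note on reading the Pod dumps above: the framework logs a pod as "available" only once its Phase is Running and its Ready condition is True; webserver-deployment-847dcfb7fb-q9l2j is reported "is not available" because it is still Pending with only PodScheduled set. Below is a minimal, self-contained Go sketch of the rule implied by the dumps — the types mirror only the fields visible above, and this is an illustration, not the e2e framework's own helper:

```go
package main

import "fmt"

// PodCondition/PodStatus mirror only the fields visible in the dumps above.
type PodCondition struct {
	Type   string // "Ready", "ContainersReady", "PodScheduled", ...
	Status string // "True" or "False"
}

type PodStatus struct {
	Phase      string // "Running", "Pending", ...
	Conditions []PodCondition
}

// isAvailable reports whether a pod would be logged as "available":
// it must be Running and carry a Ready condition with status True.
func isAvailable(s PodStatus) bool {
	if s.Phase != "Running" {
		return false
	}
	for _, c := range s.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	running := PodStatus{Phase: "Running",
		Conditions: []PodCondition{{Type: "Ready", Status: "True"}}}
	pending := PodStatus{Phase: "Pending",
		Conditions: []PodCondition{{Type: "PodScheduled", Status: "True"}}}
	fmt.Println(isAvailable(running)) // true,  e.g. pod qd5xl above
	fmt.Println(isAvailable(pending)) // false, e.g. pod q9l2j above
}
```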
• [SLOW TEST:12.125 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:24.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1223 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1223 I1023 01:29:24.799409 37 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1223, replica count: 2 I1023 01:29:27.851163 37 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:30.852252 37 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:29:33.852755 37 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:29:33.852: INFO: Creating new exec pod Oct 23 01:29:40.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:29:41.593: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:29:41.593: INFO: stdout: "" Oct 23 01:29:42.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:29:42.934: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:29:42.934: INFO: stdout: "" Oct 23 01:29:43.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:29:43.857: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:29:43.857: INFO: stdout: "externalname-service-nvk4w" Oct 23 01:29:43.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.48.247 80' Oct 23 01:29:44.497: INFO: stderr: "+ nc -v 
-t -w 2 10.233.48.247 80\n+ echo hostName\nConnection to 10.233.48.247 80 port [tcp/http] succeeded!\n" Oct 23 01:29:44.497: INFO: stdout: "externalname-service-nvk4w" Oct 23 01:29:44.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:44.729: INFO: rc: 1 Oct 23 01:29:44.729: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:45.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:46.117: INFO: rc: 1 Oct 23 01:29:46.117: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:46.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:47.171: INFO: rc: 1 Oct 23 01:29:47.171: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:47.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:48.107: INFO: rc: 1 Oct 23 01:29:48.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
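The reachability check being retried above is simply `kubectl exec` into the helper pod followed by `nc` against the target with a two-second timeout, repeated whenever the exit code is non-zero. A sketch of the same loop in Go follows; the namespace, pod name, and address are taken from the log, while the function name and retry budget are illustrative, and it assumes `kubectl` with a working kubeconfig is on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probe runs `nc` inside the exec pod via kubectl, retrying on failure,
// mirroring the "Retrying..." loop in the log above.
func probe(namespace, pod, host string, port, attempts int) error {
	shell := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl",
			"--namespace="+namespace, "exec", pod,
			"--", "/bin/sh", "-x", "-c", shell).CombinedOutput()
		if err == nil {
			fmt.Printf("reachable, combined output: %s\n", out)
			return nil
		}
		fmt.Println("rc != 0, retrying...")
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("%s:%d unreachable after %d attempts", host, port, attempts)
}

func main() {
	// Values from the log: exec pod in services-1223 probing the NodePort.
	if err := probe("services-1223", "execpodrw456", "10.10.190.207", 32619, 30); err != nil {
		fmt.Println(err)
	}
}
```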
Oct 23 01:29:48.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:49.948: INFO: rc: 1 Oct 23 01:29:49.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:50.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:50.952: INFO: rc: 1 Oct 23 01:29:50.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:51.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:52.323: INFO: rc: 1 Oct 23 01:29:52.323: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:52.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:52.964: INFO: rc: 1 Oct 23 01:29:52.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:53.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:54.021: INFO: rc: 1 Oct 23 01:29:54.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:29:54.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:54.972: INFO: rc: 1 Oct 23 01:29:54.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:55.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:55.980: INFO: rc: 1 Oct 23 01:29:55.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:56.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:56.968: INFO: rc: 1 Oct 23 01:29:56.968: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:57.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:57.961: INFO: rc: 1 Oct 23 01:29:57.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:29:58.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:58.969: INFO: rc: 1 Oct 23 01:29:58.970: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:29:59.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:29:59.977: INFO: rc: 1 Oct 23 01:29:59.977: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:00.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:00.978: INFO: rc: 1 Oct 23 01:30:00.978: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:01.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:01.940: INFO: rc: 1 Oct 23 01:30:01.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:02.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:02.972: INFO: rc: 1 Oct 23 01:30:02.973: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:03.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:03.969: INFO: rc: 1 Oct 23 01:30:03.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:30:04.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:04.977: INFO: rc: 1 Oct 23 01:30:04.977: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:05.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:05.954: INFO: rc: 1 Oct 23 01:30:05.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:06.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:06.993: INFO: rc: 1 Oct 23 01:30:06.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:07.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:07.980: INFO: rc: 1 Oct 23 01:30:07.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:30:08.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619' Oct 23 01:30:09.534: INFO: rc: 1 Oct 23 01:30:09.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32619 + echo hostName nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:30:09.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619'
Oct 23 01:30:10.075: INFO: rc: 1
Oct 23 01:30:10.075: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32619
+ echo hostName
nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same probe is retried about once per second; every attempt from 01:30:10.730 through 01:31:44.730 fails identically with rc: 1 and "nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused" ...]
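For reference, each attempt in the run above executes the same kubectl command. The sketch below reproduces a single probe from Go via os/exec; every value in it (the kubectl path, kubeconfig, namespace, pod name, and the 10.10.190.207:32619 endpoint) is copied verbatim from the log, so treat it as a reproduction aid rather than part of the test suite.

```go
// probe.go: reproduce one reachability probe exactly as logged above.
// All values (kubeconfig path, namespace, pod name, endpoint) are taken
// from the log; adjust them for any other cluster.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl exec runs the probe from inside the client pod, so the
	// connection is attempted from the pod network, not from this host.
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=services-1223",
		"exec", "execpodrw456", "--",
		"/bin/sh", "-x", "-c",
		"echo hostName | nc -v -t -w 2 10.10.190.207 32619")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s\n", out)
	if err != nil {
		// nc exits non-zero on "Connection refused", which kubectl
		// surfaces as "command terminated with exit code 1".
		fmt.Println("probe failed:", err)
	}
}
```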
Oct 23 01:31:44.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619'
Oct 23 01:31:45.207: INFO: rc: 1
Oct 23 01:31:45.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1223 exec execpodrw456 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32619:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32619
+ echo hostName
nc: connect to 10.10.190.207 port 32619 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:31:45.208: FAIL: Unexpected error:
    <*errors.errorString | 0xc003b362e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32619 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32619 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0013bb980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0013bb980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0013bb980, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 01:31:45.209: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-1223".
STEP: Found 17 events.
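The failure comes from the reachability helper invoked at service.go:1351, which keeps re-running the probe until a 2m0s deadline expires (the log shows one extra back-to-back attempt at 01:31:44.964, apparently a final check as the deadline runs out). A minimal stand-in for that loop is sketched below, not the framework's actual code: it dials the NodePort directly with net.DialTimeout instead of exec'ing nc inside a pod, so it checks reachability from wherever it runs rather than from the pod network. The ~1s retry interval, the 2m0s timeout, and the final error string are taken from the log above.

```go
// pollTCP retries a plain TCP dial to addr about once per second until it
// succeeds or the deadline passes -- a simplified stand-in for the e2e
// framework's reachability poll (the real helper probes from a client pod
// via kubectl exec + nc, as shown in the log).
package main

import (
	"fmt"
	"net"
	"time"
)

func pollTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // endpoint reachable
		}
		fmt.Println("retrying:", err) // e.g. "connection refused"
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := pollTCP("10.10.190.207:32619", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}
```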
Oct 23 01:31:45.235: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodrw456: { } Scheduled: Successfully assigned services-1223/execpodrw456 to node2
Oct 23 01:31:45.235: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-nvk4w: { } Scheduled: Successfully assigned services-1223/externalname-service-nvk4w to node1
Oct 23 01:31:45.235: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-rc8fh: { } Scheduled: Successfully assigned services-1223/externalname-service-rc8fh to node2
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:24 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-rc8fh
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:24 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-nvk4w
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:27 +0000 UTC - event for externalname-service-nvk4w: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:27 +0000 UTC - event for externalname-service-rc8fh: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:27 +0000 UTC - event for externalname-service-rc8fh: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 484.058802ms
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:28 +0000 UTC - event for externalname-service-nvk4w: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 289.341419ms
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:28 +0000 UTC - event for externalname-service-nvk4w: {kubelet node1} Started: Started container externalname-service
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:28 +0000 UTC - event for externalname-service-nvk4w: {kubelet node1} Created: Created container externalname-service
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:28 +0000 UTC - event for externalname-service-rc8fh: {kubelet node2} Started: Started container externalname-service
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:28 +0000 UTC - event for externalname-service-rc8fh: {kubelet node2} Created: Created container externalname-service
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:36 +0000 UTC - event for execpodrw456: {kubelet node2} Started: Started container agnhost-container
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:36 +0000 UTC - event for execpodrw456: {kubelet node2} Created: Created container agnhost-container
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:36 +0000 UTC - event for execpodrw456: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:31:45.235: INFO: At 2021-10-23 01:29:36 +0000 UTC - event for execpodrw456: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 289.660792ms
Oct 23 01:31:45.238: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
Oct 23 01:31:45.238: INFO: execpodrw456                node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:33 +0000 UTC }]
Oct 23 01:31:45.238: INFO: externalname-service-nvk4w  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:24 +0000
UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:24 +0000 UTC }] Oct 23 01:31:45.238: INFO: externalname-service-rc8fh node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:24 +0000 UTC }] Oct 23 01:31:45.238: INFO: Oct 23 01:31:45.242: INFO: Logging node info for node master1 Oct 23 01:31:45.245: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 90459 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:37 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:37 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:37 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:31:37 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:31:45.245: INFO: Logging kubelet events for node master1 Oct 23 01:31:45.248: INFO: Logging pods the kubelet thinks is on node master1 Oct 23 01:31:45.280: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.280: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:31:45.280: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.281: INFO: Container coredns ready: true, restart count 2 Oct 23 01:31:45.281: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:45.281: INFO: Container docker-registry ready: true, restart count 0 Oct 23 01:31:45.281: INFO: Container nginx ready: true, restart count 0 Oct 23 01:31:45.281: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 
01:31:45.281: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:31:45.281: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:31:45.281: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.281: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:31:45.281: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.281: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 01:31:45.281: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.281: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:31:45.281: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:31:45.281: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:31:45.281: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:31:45.281: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.281: INFO: Container kube-scheduler ready: true, restart count 0 W1023 01:31:45.294745 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:31:45.373: INFO: Latency metrics for node master1 Oct 23 01:31:45.373: INFO: Logging node info for node master2 Oct 23 01:31:45.375: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 90598 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:31:45.376: INFO: Logging kubelet events for node master2 Oct 23 01:31:45.378: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 01:31:45.396: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:31:45.396: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:31:45.396: INFO: kube-proxy-2xlf2 
started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:31:45.396: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:31:45.396: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:31:45.396: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:31:45.396: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:31:45.396: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.396: INFO: Container autoscaler ready: true, restart count 1 Oct 23 01:31:45.396: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:45.397: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:31:45.397: INFO: Container node-exporter ready: true, restart count 0 W1023 01:31:45.411161 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:31:45.475: INFO: Latency metrics for node master2 Oct 23 01:31:45.475: INFO: Logging node info for node master3 Oct 23 01:31:45.478: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 90446 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:36 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:36 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:36 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:31:36 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:31:45.479: INFO: Logging kubelet events for node master3 Oct 23 01:31:45.485: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 01:31:45.499: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:31:45.499: INFO: kube-controller-manager-master3 
started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:31:45.499: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:31:45.499: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:31:45.499: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:31:45.499: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 01:31:45.499: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:45.499: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:31:45.499: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:31:45.499: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:31:45.499: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:31:45.499: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.499: INFO: Container coredns ready: true, restart count 2 W1023 01:31:45.517879 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 23 01:31:45.589: INFO: Latency metrics for node master3 Oct 23 01:31:45.589: INFO: Logging node info for node node1 Oct 23 01:31:45.592: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 90603 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:42 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:42 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:42 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:31:42 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:31:45.592: INFO: Logging kubelet events for node node1 Oct 23 01:31:45.594: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 01:31:45.613: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 01:31:45.613: INFO: Container discover ready: false, restart count 0 Oct 23 01:31:45.613: INFO: Container init ready: false, restart count 0 Oct 23 01:31:45.613: INFO: Container install ready: false, restart count 0 Oct 23 01:31:45.613: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:45.613: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:31:45.613: INFO: externalname-service-nvk4w started at 2021-10-23 01:29:24 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container externalname-service ready: true, restart count 0 Oct 23 01:31:45.613: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:31:45.613: INFO: webserver-deployment-847dcfb7fb-r4qfb started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:45.613: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:31:45.613: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:31:45.613: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:31:45.613: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:45.613: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:31:45.613: INFO: webserver-deployment-847dcfb7fb-mw6l5 started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:45.613: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:31:45.613: INFO: sample-webhook-deployment-78988fc6cd-p8mpv started at 2021-10-23 01:31:34 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container sample-webhook ready: true, restart count 0 Oct 23 01:31:45.613: INFO: webserver-deployment-795d758f88-lptd5 started at 2021-10-23 01:31:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:45.613: INFO: webserver-deployment-847dcfb7fb-9cqc8 started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container httpd ready: true, restart count 0 Oct 
23 01:31:45.613: INFO: webserver-deployment-795d758f88-mpt5p started at 2021-10-23 01:31:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:45.613: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:31:45.613: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 01:31:45.613: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container grafana ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:31:45.613: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:31:45.613: INFO: Container collectd ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:31:45.613: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:31:45.613: INFO: webserver-deployment-795d758f88-n2qtl started at 2021-10-23 01:31:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.613: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:45.614: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:45.614: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:31:45.614: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:31:45.614: INFO: test-pod started at 2021-10-23 01:29:07 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container webserver ready: true, restart count 0 Oct 23 01:31:45.614: INFO: webserver-deployment-847dcfb7fb-d8rb2 started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:45.614: INFO: webserver-deployment-795d758f88-n6kbx started at 2021-10-23 01:31:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:45.614: INFO: replace-27249211-7wtt7 started at 2021-10-23 01:31:00 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container c ready: true, restart count 0 Oct 23 01:31:45.614: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:31:45.614: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:31:45.614: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:45.614: INFO: Container nfd-worker ready: true, restart count 0 W1023 01:31:45.630368 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 23 01:31:48.337: INFO: Latency metrics for node node1 Oct 23 01:31:48.337: INFO: Logging node info for node node2 Oct 23 01:31:48.340: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 90568 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:31:41 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:31:48.342: INFO: Logging kubelet events for node node2 Oct 23 01:31:48.344: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 01:31:48.369: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:31:48.369: INFO: execpodrw456 started at 2021-10-23 01:29:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:31:48.369: INFO: busybox-9fcc7120-de9d-4e22-90aa-0bf4bfff039d started at 2021-10-23 01:28:48 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container busybox ready: true, restart count 0 Oct 23 01:31:48.369: INFO: webserver-deployment-847dcfb7fb-q9l2j started at 2021-10-23 01:31:47 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:48.369: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:48.369: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:31:48.369: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:31:48.369: INFO: webserver-deployment-847dcfb7fb-6bs2n started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:48.369: INFO: webserver-deployment-847dcfb7fb-q5k7h started at 2021-10-23 01:31:47 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:48.369: INFO: test-webserver-2f59f410-c7c2-4ba2-a4d8-0726690d56e6 started at 2021-10-23 01:31:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container test-webserver ready: true, restart count 0 Oct 23 01:31:48.369: INFO: kube-proxy-5h2bl started at 
2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:31:48.369: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 01:31:48.369: INFO: Container discover ready: false, restart count 0 Oct 23 01:31:48.369: INFO: Container init ready: false, restart count 0 Oct 23 01:31:48.369: INFO: Container install ready: false, restart count 0 Oct 23 01:31:48.369: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:31:48.369: INFO: webserver-deployment-847dcfb7fb-r6jbv started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:48.369: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:31:48.369: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:31:48.369: INFO: execpodx294j started at 2021-10-23 01:30:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:31:48.369: INFO: ss2-1 started at 2021-10-23 01:31:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container webserver ready: true, restart count 0 Oct 23 01:31:48.369: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:31:48.369: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:31:48.369: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:31:48.369: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:31:48.369: INFO: externalname-service-9qwq9 started at 2021-10-23 01:30:20 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container externalname-service ready: true, restart count 0 Oct 23 01:31:48.369: INFO: webserver-deployment-795d758f88-jgqqn started at 2021-10-23 01:31:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.369: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:48.369: INFO: ss2-0 started at 2021-10-23 01:31:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container webserver ready: true, restart count 0 Oct 23 01:31:48.370: INFO: webserver-deployment-795d758f88-4kng8 started at 2021-10-23 01:31:47 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:48.370: INFO: externalname-service-8zmdb started at 2021-10-23 01:30:20 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container externalname-service ready: true, restart count 0 Oct 23 01:31:48.370: INFO: webserver-deployment-847dcfb7fb-h474s started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:48.370: INFO: 
webserver-deployment-847dcfb7fb-qd5xl started at 2021-10-23 01:31:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container httpd ready: true, restart count 0 Oct 23 01:31:48.370: INFO: webserver-deployment-847dcfb7fb-wph6n started at 2021-10-23 01:31:47 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container httpd ready: false, restart count 0 Oct 23 01:31:48.370: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:31:48.370: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:31:48.370: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:31:48.370: INFO: liveness-cfffd791-4501-462a-9e71-ec386380442f started at 2021-10-23 01:29:24 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:31:48.370: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:31:48.370: INFO: Container collectd ready: true, restart count 0 Oct 23 01:31:48.370: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:31:48.370: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:31:48.370: INFO: externalname-service-rc8fh started at 2021-10-23 01:29:24 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container externalname-service ready: true, restart count 0 Oct 23 01:31:48.370: INFO: ss2-2 started at 2021-10-23 01:31:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container webserver ready: true, restart count 0 Oct 23 01:31:48.370: INFO: webserver-deployment-847dcfb7fb-m6psd started at 2021-10-23 01:31:47 +0000 UTC (0+1 container statuses recorded) Oct 23 01:31:48.370: INFO: Container httpd ready: false, restart count 0 W1023 01:31:48.383999 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:31:50.658: INFO: Latency metrics for node node2 Oct 23 01:31:50.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1223" for this suite. 
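The node and kubelet dump above is the diagnostic output the framework emits because the Services case failed; the formal verdict follows below. For reference, the operation under test — switching a Service from ExternalName to NodePort — can be sketched with client-go roughly as follows. This is not the e2e framework's own code; the namespace, service name, port, and selector are assumptions for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svcs := cs.CoreV1().Services("default") // hypothetical namespace
	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the type: clear the ExternalName field, declare a port, and point
	// the service at backing pods; the apiserver then allocates a node port.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"app": "externalname-service"} // assumed pod labels
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}

	updated, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated node port:", updated.Spec.Ports[0].NodePort)
}

The test then dials <nodeIP>:<nodePort> (here 10.10.190.207:32619) until a 2m0s timeout; the failure recorded below means that probe never connected, not that the type change itself was rejected.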
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [145.902 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:45.208: Unexpected error: <*errors.errorString | 0xc003b362e0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32619 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32619 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":7,"skipped":85,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:34.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:31:34.385: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:31:36.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:31:38.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:31:40.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549494, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:31:43.408: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:43.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2699-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:51.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5615" for this suite. STEP: Destroying namespace "webhook-5615-markers" for this suite. 
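The webhook case above registers a mutating webhook against a custom resource served at two versions, then flips the CRD's storage version from v1 to v2 and patches the resource again. A minimal client-go sketch of registering such a MutatingWebhookConfiguration follows; the configuration name, webhook path, and CA handling are assumptions, while the service name/namespace and resource plural are taken from the log.

package main

import (
	"context"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sideEffects := admv1.SideEffectClassNone
	failurePolicy := admv1.Fail
	path := "/mutation" // hypothetical path served by the webhook pod

	hook := &admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cr-mutator"}, // hypothetical
		Webhooks: []admv1.MutatingWebhook{{
			Name: "cr.mutation.webhook.example.com", // hypothetical
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{
					Namespace: "webhook-5615",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				// CABundle would carry the server cert generated in the
				// "Setting up server cert" step; elided here.
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create, admv1.Update},
				Rule: admv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1", "v2"}, // both served versions
					Resources:   []string{"e2e-test-webhook-2699-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Because the rule matches both API versions, the mutation is applied regardless of which version is currently the storage version — which is exactly what the storage-version flip in the test verifies.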
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.408 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":14,"skipped":261,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:51.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:51.620: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:31:57.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7105" for this suite. 
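The CustomResourceDefinition case above exercises get, update, and patch against the CRD's /status subresource. A rough sketch of the three operations with the apiextensions clientset, assuming a hypothetical CRD name and condition type:

package main

import (
	"context"

	apixv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apixclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apixclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crds := cs.ApiextensionsV1().CustomResourceDefinitions()

	name := "foos.example.com" // hypothetical CRD

	// GET the object (status comes along with it)...
	crd, err := crds.Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// ...UPDATE through /status; spec changes are ignored on this path...
	crd.Status.Conditions = append(crd.Status.Conditions, apixv1.CustomResourceDefinitionCondition{
		Type: "Demo", Status: apixv1.ConditionTrue, Reason: "E2E", Message: "set via UpdateStatus",
	})
	if _, err := crds.UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// ...and PATCH, naming "status" as the subresource argument.
	patch := []byte(`{"status":{"conditions":[]}}`)
	if _, err := crds.Patch(context.TODO(), name, types.MergePatchType, patch,
		metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}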
• [SLOW TEST:5.564 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":15,"skipped":271,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:18.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1023 01:30:18.212139 25 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:00.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1312" for this suite. 
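The CronJob case above creates a schedule with concurrencyPolicy Replace and waits to see a still-running job killed and replaced by the next scheduled one (its job pod, replace-27249211-7wtt7 with container "c", appears in the node1 listing earlier). Per the deprecation warning logged at the start of the case, new code should target batch/v1 rather than batch/v1beta1; a sketch of an equivalent CronJob, with hypothetical name and namespace:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "replace-demo"}, // hypothetical
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.ReplaceConcurrent, // kill and replace a job still running at the next tick
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox:1.28",
								Command: []string{"sleep", "300"}, // deliberately outlives the one-minute interval
							}},
						},
					},
				},
			},
		},
	}
	if _, err := cs.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}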
• [SLOW TEST:102.051 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":9,"skipped":168,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:00.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 01:32:00.293: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 23 01:32:00.298: INFO: starting watch STEP: patching STEP: updating Oct 23 01:32:00.313: INFO: waiting for watch events with expected annotations Oct 23 01:32:00.313: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:00.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3505" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":10,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:00.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:00.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7497" for this suite. 
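Two short cases finish above: the EndpointSlice API case, which walks create/get/list/watch/patch/delete under /apis/discovery.k8s.io/v1 (the "discovery.k8s.iov1" STEP label is simply missing its slash), and the Table-transformation 406 check. A minimal sketch of the create step against the discovery/v1 API, with hypothetical names and addresses:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ptr is a small helper for the pointer-typed EndpointPort fields.
func ptr[T any](v T) *T { return &v }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	slice := &discoveryv1.EndpointSlice{
		ObjectMeta:  metav1.ObjectMeta{Name: "demo-slice"}, // hypothetical
		AddressType: discoveryv1.AddressTypeIPv4,           // matches this cluster's ipv4 family
		Endpoints: []discoveryv1.Endpoint{{
			Addresses: []string{"10.244.4.10"}, // hypothetical endpoint IP
		}},
		Ports: []discoveryv1.EndpointPort{{
			Name:     ptr("http"),
			Port:     ptr(int32(80)),
			Protocol: ptr(corev1.ProtocolTCP),
		}},
	}
	if _, err := cs.DiscoveryV1().EndpointSlices("default").
		Create(context.TODO(), slice, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}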
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":11,"skipped":226,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:50.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 01:31:50.720: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:01.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7683" for this suite. • [SLOW TEST:10.752 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":8,"skipped":98,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:00.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 23 01:32:00.547: INFO: Waiting up to 5m0s for pod "pod-2d408553-bf03-4cf5-84d1-f689f6cc6340" in namespace "emptydir-910" to be "Succeeded or Failed" Oct 23 01:32:00.549: INFO: Pod "pod-2d408553-bf03-4cf5-84d1-f689f6cc6340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782928ms Oct 23 01:32:02.552: INFO: Pod "pod-2d408553-bf03-4cf5-84d1-f689f6cc6340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005652463s Oct 23 01:32:04.555: INFO: Pod "pod-2d408553-bf03-4cf5-84d1-f689f6cc6340": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008795493s STEP: Saw pod success Oct 23 01:32:04.555: INFO: Pod "pod-2d408553-bf03-4cf5-84d1-f689f6cc6340" satisfied condition "Succeeded or Failed" Oct 23 01:32:04.558: INFO: Trying to get logs from node node1 pod pod-2d408553-bf03-4cf5-84d1-f689f6cc6340 container test-container: STEP: delete the pod Oct 23 01:32:04.568: INFO: Waiting for pod pod-2d408553-bf03-4cf5-84d1-f689f6cc6340 to disappear Oct 23 01:32:04.570: INFO: Pod pod-2d408553-bf03-4cf5-84d1-f689f6cc6340 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:04.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-910" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":240,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:04.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Oct 23 01:32:04.637: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7238 proxy --unix-socket=/tmp/kubectl-proxy-unix677475899/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:04.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7238" for this suite. 
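The Kubectl case above starts kubectl proxy with --unix-socket and then retrieves /api/ through that socket rather than a TCP port. Outside the framework, the same retrieval can be done with a stock net/http client whose transport dials the socket; the socket path below is the one from this run, and the "unix" host in the URL is an ignored placeholder.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy-unix677475899/test" // socket path from the run above

	// HTTP client whose transport dials the proxy's unix socket instead of TCP.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", sock)
		},
	}}

	resp, err := client.Get("http://unix/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // API versions served by the proxied apiserver
}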
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":13,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:01.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:06.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6878" for this suite. 
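Among the ReplicationController lifecycle steps above, the scale patch is the least obvious: RCs expose a scale subresource, which client-go surfaces as GetScale/UpdateScale on the ReplicationController client. A sketch with hypothetical names:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rcs := cs.CoreV1().ReplicationControllers("default") // hypothetical namespace

	// Read the scale subresource, bump replicas, and write it back.
	scale, err := rcs.GetScale(context.TODO(), "demo-rc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2
	updated, err := rcs.UpdateScale(context.TODO(), "demo-rc", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("scaled to", updated.Spec.Replicas)
}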
• [SLOW TEST:5.193 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":9,"skipped":121,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":10,"skipped":113,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:47.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 23 01:31:47.834: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 23 01:31:49.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549508, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:31:51.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549508, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:31:53.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549508, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:31:55.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549508, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549507, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:31:58.853: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:31:58.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:06.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-745" for this suite. 
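The conversion case above wires a webhook into the CRD's spec.conversion so the apiserver can serve a mixed list of v1 and v2 custom resources, converting on the fly. In the apiextensions/v1 Go types the stanza looks roughly like this; the service name and namespace are taken from the log, while the path and port are assumptions.

package main

import (
	"fmt"

	apixv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // hypothetical path on the conversion service
	port := int32(443)

	conv := &apixv1.CustomResourceConversion{
		Strategy: apixv1.WebhookConverter, // delegate v1<->v2 conversion to the webhook
		Webhook: &apixv1.WebhookConversion{
			ClientConfig: &apixv1.WebhookClientConfig{
				Service: &apixv1.ServiceReference{
					Namespace: "crd-webhook-745",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				// CABundle would carry the cert from the "Setting up server cert" step.
			},
			ConversionReviewVersions: []string{"v1"},
		},
	}
	fmt.Printf("%+v\n", conv) // this value goes into CustomResourceDefinition.Spec.Conversion
}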
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:19.797 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":11,"skipped":113,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:04.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:32:05.168: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:32:07.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549525, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549525, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549525, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549525, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:32:10.196: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the 
validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:10.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9716" for this suite. STEP: Destroying namespace "webhook-9716-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.420 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":14,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:07.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-23968a37-dccf-488a-8e0d-49e567667670 STEP: Creating a pod to test consume secrets Oct 23 01:32:07.093: INFO: Waiting up to 5m0s for pod "pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd" in namespace "secrets-2806" to be "Succeeded or Failed" Oct 23 01:32:07.096: INFO: Pod "pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.763074ms Oct 23 01:32:09.100: INFO: Pod "pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006704041s Oct 23 01:32:11.105: INFO: Pod "pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011612026s STEP: Saw pod success Oct 23 01:32:11.105: INFO: Pod "pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd" satisfied condition "Succeeded or Failed" Oct 23 01:32:11.108: INFO: Trying to get logs from node node1 pod pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd container secret-volume-test: STEP: delete the pod Oct 23 01:32:11.125: INFO: Waiting for pod pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd to disappear Oct 23 01:32:11.127: INFO: Pod pod-secrets-f94d50f2-4c11-435b-946c-d689b5943bbd no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:11.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2806" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":116,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:06.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 01:32:06.740: INFO: The status of Pod pod-update-78d861ee-b9f6-40ce-83e4-d042af28881b is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:32:08.743: INFO: The status of Pod pod-update-78d861ee-b9f6-40ce-83e4-d042af28881b is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:32:10.743: INFO: The status of Pod pod-update-78d861ee-b9f6-40ce-83e4-d042af28881b is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 23 01:32:11.258: INFO: Successfully updated pod "pod-update-78d861ee-b9f6-40ce-83e4-d042af28881b" STEP: verifying the updated pod is in kubernetes Oct 23 01:32:11.262: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:11.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9305" for this suite. 
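------------------------------
Earlier in this block, the [sig-storage] Secrets spec mounts a secret into a pod "with mappings", i.e. with explicit key-to-path items rather than the default one-file-per-key projection, then asserts on the file the container reads back. A hedged client-go sketch of the pod shape involved; the secret name, key, paths, namespace, image, and command are illustrative, not the generated ones from the log.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-demo", // illustrative name
						// The "mappings": project key data-1 to an
						// explicit path instead of the default layout.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c",
					"cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Print("created; wait for phase Succeeded, then read the container log")
}
------------------------------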
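------------------------------
The [sig-node] Pods spec directly above creates a pod, mutates it in place ("updating the pod"), and re-reads it to confirm the change. Updates go through optimistic concurrency on resourceVersion, so the idiomatic client-go pattern is get-modify-update wrapped in a conflict retry. A sketch with invented pod, namespace, and label names.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods := cs.CoreV1().Pods("default")
	// Re-read and retry on 409 Conflict so a stale resourceVersion
	// never aborts the update, mirroring the "updating the pod" step.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get(context.TODO(), "pod-update-demo", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // hypothetical label change
		_, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Print("Pod update OK")
}
------------------------------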
• ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":126,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:10.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:16.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9381" for this suite. • [SLOW TEST:6.064 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":15,"skipped":348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:16.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 01:32:16.468: INFO: Waiting up to 5m0s for pod "security-context-71f09221-c234-457a-b2fc-fed8dab53080" in namespace "security-context-3037" to be "Succeeded or Failed" Oct 23 01:32:16.470: INFO: Pod "security-context-71f09221-c234-457a-b2fc-fed8dab53080": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.246218ms Oct 23 01:32:18.474: INFO: Pod "security-context-71f09221-c234-457a-b2fc-fed8dab53080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005678051s Oct 23 01:32:20.479: INFO: Pod "security-context-71f09221-c234-457a-b2fc-fed8dab53080": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010444128s Oct 23 01:32:22.484: INFO: Pod "security-context-71f09221-c234-457a-b2fc-fed8dab53080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016121033s STEP: Saw pod success Oct 23 01:32:22.484: INFO: Pod "security-context-71f09221-c234-457a-b2fc-fed8dab53080" satisfied condition "Succeeded or Failed" Oct 23 01:32:22.486: INFO: Trying to get logs from node node2 pod security-context-71f09221-c234-457a-b2fc-fed8dab53080 container test-container: STEP: delete the pod Oct 23 01:32:22.497: INFO: Waiting for pod security-context-71f09221-c234-457a-b2fc-fed8dab53080 to disappear Oct 23 01:32:22.499: INFO: Pod security-context-71f09221-c234-457a-b2fc-fed8dab53080 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:22.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3037" for this suite. • [SLOW TEST:6.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:22.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:26.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9996" for this suite. 
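------------------------------
The two [sig-node] specs above both assert on pod security settings: the Sysctls spec sets kernel.shm_rmid_forced through spec.securityContext.sysctls and reads it back from inside the container, and the Security Context spec pins runAsUser/runAsGroup and checks the resulting uid and gid. A combined sketch of the relevant fields; the ids, image, and command are illustrative, not what the suite generates.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(v int64) *int64 { return &v }

// securityDemoPod sketches the fields the two specs above assert on.
func securityDemoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Namespaced sysctl applied to the whole pod; the sysctl
				// spec reads it back with `sysctl kernel.shm_rmid_forced`.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "id -u && id -g"},
				SecurityContext: &corev1.SecurityContext{
					// Container-level values take precedence over
					// pod-level ones when both are set.
					RunAsUser:  int64Ptr(1001),
					RunAsGroup: int64Ptr(2002),
				},
			}},
		},
	}
}

func main() { _ = securityDemoPod() }
------------------------------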
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":401,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:30:20.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2453 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2453 I1023 01:30:20.311758 35 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2453, replica count: 2 I1023 01:30:23.362917 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:30:26.363688 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:30:26.363: INFO: Creating new exec pod Oct 23 01:30:31.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:30:31.632: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:30:31.632: INFO: stdout: "" Oct 23 01:30:32.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:30:32.885: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:30:32.885: INFO: stdout: "" Oct 23 01:30:33.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:30:33.881: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:30:33.881: INFO: stdout: "" Oct 23 01:30:34.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:30:34.866: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:30:34.866: INFO: stdout: "" Oct 23 01:30:35.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j 
-- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:30:35.868: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:30:35.868: INFO: stdout: ""
[the same probe was retried roughly once per second from 01:30:36.632 through 01:32:24.983, more than a hundred attempts in all; every elided attempt logged the identical stderr "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" and an empty stdout, i.e. the TCP connect to externalname-service:80 always succeeded but the endpoint never echoed a hostname back, so the reachability check never passed]
Oct 23 01:32:25.632: INFO:
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:25.875: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:25.875: INFO: stdout: "" Oct 23 01:32:26.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:26.852: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:26.853: INFO: stdout: "" Oct 23 01:32:27.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:27.891: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:27.891: INFO: stdout: "" Oct 23 01:32:28.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:28.906: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:28.906: INFO: stdout: "" Oct 23 01:32:29.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:29.899: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:29.899: INFO: stdout: "" Oct 23 01:32:30.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:30.859: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:30.859: INFO: stdout: "" Oct 23 01:32:31.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:31.880: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:31.880: INFO: stdout: "" Oct 23 01:32:31.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec execpodx294j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 01:32:32.105: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 01:32:32.105: INFO: stdout: "" Oct 23 01:32:32.106: FAIL: Unexpected error: <*errors.errorString | 0xc00369e470>: { s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol occurred Full Stack Trace 
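The loop above is the framework's reachability poll: the same kubectl exec probe fires roughly once per second until a 2m0s budget expires. Note that every attempt's stderr reports a successful TCP connect while stdout stays empty; the helper only counts a probe as a success when the backend echoes a non-empty reply, so the service is declared unreachable despite the connects. A minimal sketch of that retry pattern, assuming kubectl is on PATH (names here are illustrative, not the framework's actual helpers):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// probeOnce mirrors the probe in the log: pipe a token into nc, aimed at the
// service, from inside the exec pod, and return whatever came back on stdout.
func probeOnce(ns, pod, svc string, port int) (string, error) {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", svc, port)
	out, err := exec.Command("kubectl", "--namespace", ns, "exec", pod,
		"--", "/bin/sh", "-c", shellCmd).Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // the 2m0s budget in the failure message
	for time.Now().Before(deadline) {
		stdout, err := probeOnce("services-2453", "execpodx294j", "externalname-service", 80)
		// A bare TCP connect is not enough: the check wants a non-empty reply.
		if err == nil && strings.TrimSpace(stdout) != "" {
			fmt.Println("reachable, backend replied:", strings.TrimSpace(stdout))
			return
		}
		time.Sleep(time.Second) // matches the ~1s cadence of the entries above
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol")
}

Polling this way tolerates transient network blips; only a full two minutes of empty replies produces the failure and the stack trace recorded next.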
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.14()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00161ca80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00161ca80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00161ca80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 01:32:32.107: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2453".
STEP: Found 17 events.
Oct 23 01:32:32.132: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodx294j: { } Scheduled: Successfully assigned services-2453/execpodx294j to node2
Oct 23 01:32:32.132: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-8zmdb: { } Scheduled: Successfully assigned services-2453/externalname-service-8zmdb to node2
Oct 23 01:32:32.132: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-9qwq9: { } Scheduled: Successfully assigned services-2453/externalname-service-9qwq9 to node2
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:20 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-8zmdb
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:20 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-9qwq9
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:22 +0000 UTC - event for externalname-service-9qwq9: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:23 +0000 UTC - event for externalname-service-8zmdb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:23 +0000 UTC - event for externalname-service-8zmdb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 321.264219ms
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:23 +0000 UTC - event for externalname-service-9qwq9: {kubelet node2} Started: Started container externalname-service
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:23 +0000 UTC - event for externalname-service-9qwq9: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 355.286055ms
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:23 +0000 UTC - event for externalname-service-9qwq9: {kubelet node2} Created: Created container externalname-service
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:24 +0000 UTC - event for externalname-service-8zmdb: {kubelet node2} Started: Started container externalname-service
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:24 +0000 UTC - event for externalname-service-8zmdb: {kubelet node2} Created: Created container externalname-service
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:27 +0000 UTC - event for execpodx294j: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:28 +0000 UTC - event for execpodx294j: {kubelet node2} Started: Started container agnhost-container
Oct 23 01:32:32.132:
INFO: At 2021-10-23 01:30:28 +0000 UTC - event for execpodx294j: {kubelet node2} Created: Created container agnhost-container Oct 23 01:32:32.132: INFO: At 2021-10-23 01:30:28 +0000 UTC - event for execpodx294j: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 369.422163ms Oct 23 01:32:32.135: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:32:32.135: INFO: execpodx294j node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:26 +0000 UTC }] Oct 23 01:32:32.135: INFO: externalname-service-8zmdb node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:20 +0000 UTC }] Oct 23 01:32:32.135: INFO: externalname-service-9qwq9 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:30:20 +0000 UTC }] Oct 23 01:32:32.135: INFO: Oct 23 01:32:32.139: INFO: Logging node info for node master1 Oct 23 01:32:32.141: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 91831 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:27 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:27 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:27 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:32:27 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:32:32.142: INFO: Logging kubelet events for node master1 Oct 23 01:32:32.144: INFO: Logging pods the kubelet 
thinks are on node master1
Oct 23 01:32:32.153: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container coredns ready: true, restart count 2
Oct 23 01:32:32.153: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container docker-registry ready: true, restart count 0
Oct 23 01:32:32.153: INFO: Container nginx ready: true, restart count 0
Oct 23 01:32:32.153: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:32:32.153: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:32:32.153: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:32:32.153: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 01:32:32.153: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:32:32.153: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:32:32.153: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:32:32.153: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:32:32.153: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.153: INFO: Container kube-scheduler ready: true, restart count 0
W1023 01:32:32.169016 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
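The per-node "Logging pods" dump above amounts to listing the pods whose spec.nodeName matches the node. A short client-go sketch of the same query, assuming the kubeconfig path used throughout this run (illustrative code, not the framework's implementation):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods across all namespaces that are bound to master1.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=master1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}

The field selector is evaluated server-side, so only the pods scheduled to that node come back.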
Oct 23 01:32:32.237: INFO: Latency metrics for node master1 Oct 23 01:32:32.237: INFO: Logging node info for node master2 Oct 23 01:32:32.240: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 91868 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:31 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:31 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:31 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:32:31 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:32:32.240: INFO: Logging kubelet events for node master2
Oct 23 01:32:32.243: INFO: Logging pods the kubelet thinks are on node master2
Oct 23 01:32:32.250: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.250: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 01:32:32.250: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.250: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 01:32:32.251: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.251: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:32:32.251: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:32:32.251: INFO: Init container install-cni ready: true, restart count 2
Oct 23 01:32:32.251: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:32:32.251: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.251: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:32:32.251: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.251: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:32:32.251: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.251: INFO: Container autoscaler ready: true, restart count 1
Oct 23 01:32:32.251: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:32:32.251: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:32:32.251: INFO: Container node-exporter ready: true, restart count 0
W1023 01:32:32.265872 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
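Each Node Info dump in this section ends with a Conditions list (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready). The suite's initial "waiting for all nodes to be schedulable" phase reduces to checking the Ready condition; a small sketch under that assumption (nodeReady is a hypothetical helper, not framework code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether the node's Ready condition is True, i.e. the
// kubelet is posting ready status, as in the dumps above.
func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionTrue, Reason: "KubeletReady"},
	}}}
	fmt.Println("ready:", nodeReady(&n)) // ready: true
}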
Oct 23 01:32:32.333: INFO: Latency metrics for node master2 Oct 23 01:32:32.333: INFO: Logging node info for node master3 Oct 23 01:32:32.336: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 91802 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:26 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:26 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:26 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:32:26 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:32:32.336: INFO: Logging kubelet events for node master3
Oct 23 01:32:32.339: INFO: Logging pods the kubelet thinks are on node master3
Oct 23 01:32:32.349: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 01:32:32.349: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:32:32.349: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:32:32.349: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:32:32.349: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 01:32:32.349: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:32:32.349: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:32:32.349: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:32:32.349: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:32:32.349: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:32:32.349: INFO: Container coredns ready: true, restart count 2
Oct 23 01:32:32.349:
INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.349: INFO: Container kube-scheduler ready: true, restart count 2 W1023 01:32:32.364749 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:32:32.433: INFO: Latency metrics for node master3 Oct 23 01:32:32.433: INFO: Logging node info for node node1 Oct 23 01:32:32.436: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 91749 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:23 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:23 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:23 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:32:23 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:32:32.436: INFO: Logging kubelet events for node node1 Oct 23 01:32:32.438: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 01:32:32.456: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:32:32.456: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 01:32:32.456: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:32:32.456: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:32:32.456: INFO: Container grafana ready: true, restart count 0 Oct 23 01:32:32.456: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:32:32.456: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:32:32.456: INFO: Container collectd ready: true, restart count 0 Oct 23 01:32:32.456: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:32:32.456: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:32:32.456: INFO: client-containers-79ef9dff-4157-471f-a204-5489bad6e247 started at 2021-10-23 01:32:22 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:32:32.456: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 01:32:32.456: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:32:32.456: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:32:32.456: INFO: test-pod started at 2021-10-23 01:29:07 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container webserver ready: true, restart count 0 Oct 23 01:32:32.456: INFO: var-expansion-9cb674eb-d5dc-4f84-9ce2-f7a33e33d32a started at 2021-10-23 01:31:57 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container dapi-container ready: false, restart count 0 Oct 23 01:32:32.456: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:32:32.456: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:32:32.456: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:32:32.456: INFO: replace-27249211-7wtt7 started at 2021-10-23 01:31:00 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.456: INFO: Container c ready: false, restart count 0 Oct 23 01:32:32.456: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 01:32:32.456: INFO: Container discover ready: false, restart count 0 Oct 23 01:32:32.456: INFO: Container init ready: false, restart count 0 Oct 23 01:32:32.456: INFO: Container install ready: false, 
restart count 0 Oct 23 01:32:32.456: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:32:32.456: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:32:32.457: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:32:32.457: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.457: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:32:32.457: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:32:32.457: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:32:32.457: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:32:32.457: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.457: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:32:32.457: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:32:32.457: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:32:32.457: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:32:32.457: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.457: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:32:32.457: INFO: oidc-discovery-validator started at 2021-10-23 01:32:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.457: INFO: Container oidc-discovery-validator ready: false, restart count 0 Oct 23 01:32:32.457: INFO: ss-0 started at 2021-10-23 01:32:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.457: INFO: Container webserver ready: true, restart count 0 W1023 01:32:32.470960 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
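
The node1 dump above is the suite's standard failure diagnostics: node info, kubelet events, then every pod the kubelet reports for the node. For orientation, a minimal client-go sketch of that last listing (not the framework's own dump helper; the kubeconfig path is the one the suite logs, and node1 is hardcoded for illustration):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List every pod scheduled to node1, across all namespaces, mirroring
	// the "pods the kubelet thinks is on node" section of the dump above.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
	}
}

The later fragments below assume this same clientset setup.
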
Oct 23 01:32:32.744: INFO: Latency metrics for node node1 Oct 23 01:32:32.744: INFO: Logging node info for node node2 Oct 23 01:32:32.747: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 91873 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:32 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:32 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:32:32 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:32:32 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:32:32.748: INFO: Logging kubelet events for node node2 Oct 23 01:32:32.750: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 01:32:32.765: INFO: externalname-service-8zmdb started at 2021-10-23 01:30:20 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.765: INFO: Container externalname-service ready: true, restart count 0 Oct 23 01:32:32.765: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.765: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:32:32.765: INFO: liveness-cfffd791-4501-462a-9e71-ec386380442f started at 2021-10-23 01:29:24 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:32:32.766: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:32:32.766: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:32:32.766: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:32:32.766: INFO: Container collectd ready: true, restart count 0 Oct 23 01:32:32.766: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:32:32.766: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:32:32.766: INFO: busybox-9fcc7120-de9d-4e22-90aa-0bf4bfff039d started at 2021-10-23 01:28:48 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container busybox ready: true, restart count 0 Oct 23 01:32:32.766: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:32:32.766: INFO: test-webserver-2f59f410-c7c2-4ba2-a4d8-0726690d56e6 
started at 2021-10-23 01:31:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container test-webserver ready: true, restart count 0 Oct 23 01:32:32.766: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:32:32.766: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:32:32.766: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:32:32.766: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 01:32:32.766: INFO: Container discover ready: false, restart count 0 Oct 23 01:32:32.766: INFO: Container init ready: false, restart count 0 Oct 23 01:32:32.766: INFO: Container install ready: false, restart count 0 Oct 23 01:32:32.766: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:32:32.766: INFO: ss2-2 started at 2021-10-23 01:32:24 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container webserver ready: true, restart count 0 Oct 23 01:32:32.766: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:32:32.766: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:32:32.766: INFO: execpodx294j started at 2021-10-23 01:30:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:32:32.766: INFO: ss2-1 started at 2021-10-23 01:31:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container webserver ready: false, restart count 0 Oct 23 01:32:32.766: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:32:32.766: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:32:32.766: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:32:32.766: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:32:32.766: INFO: externalname-service-9qwq9 started at 2021-10-23 01:30:20 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container externalname-service ready: true, restart count 0 Oct 23 01:32:32.766: INFO: ss2-0 started at 2021-10-23 01:31:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container webserver ready: true, restart count 0 Oct 23 01:32:32.766: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:32:32.766: INFO: Container kube-sriovdp ready: true, restart count 0 W1023 01:32:32.787952 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:32:33.061: INFO: Latency metrics for node node2 Oct 23 01:32:33.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2453" for this suite. 
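
The node2 dump renders the same Capacity/Allocatable lists inline (hugepages-2Mi, cmk.intel.com/exclusive-cores, intel.com/intel_sriov_netdevice, and so on). A short fragment that prints them, reusing the clientset from the first sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printAllocatable dumps node2's allocatable resources, the list the log
// above flattens into the NodeStatus struct; cs comes from the earlier setup.
func printAllocatable(ctx context.Context, cs kubernetes.Interface) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for name, qty := range node.Status.Allocatable {
		fmt.Printf("%s: %s\n", name, qty.String())
	}
	return nil
}
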
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [132.797 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:32:32.106: Unexpected error: <*errors.errorString | 0xc00369e470>: { s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":13,"skipped":388,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:33.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:32:33.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73" in namespace "projected-5589" to be "Succeeded or Failed" Oct 23 01:32:33.128: INFO: Pod "downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73": Phase="Pending", Reason="", readiness=false. Elapsed: 1.826227ms Oct 23 01:32:35.131: INFO: Pod "downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004947113s Oct 23 01:32:37.135: INFO: Pod "downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00889819s STEP: Saw pod success Oct 23 01:32:37.135: INFO: Pod "downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73" satisfied condition "Succeeded or Failed" Oct 23 01:32:37.137: INFO: Trying to get logs from node node2 pod downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73 container client-container: STEP: delete the pod Oct 23 01:32:37.149: INFO: Waiting for pod downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73 to disappear Oct 23 01:32:37.151: INFO: Pod downwardapi-volume-8ceca114-d17f-4979-af7a-af704577fe73 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5589" for this suite. 
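
Note what the failure above does and does not say: the type change itself was accepted, but the resulting ClusterIP never answered on port 80 within the 2m0s poll. A minimal sketch of the mutation the spec performs, assuming the service and namespace names from the log; the target port is illustrative, and converting away from ExternalName requires clearing spec.externalName:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// switchToClusterIP converts an ExternalName service to ClusterIP, the
// operation this conformance spec exercises. 9376 is an assumed backend port.
func switchToClusterIP(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = v1.ServiceTypeClusterIP
	svc.Spec.ExternalName = "" // must be cleared when leaving ExternalName
	svc.Spec.Ports = []v1.ServicePort{{
		Protocol:   v1.ProtocolTCP,
		Port:       80,
		TargetPort: intstr.FromInt(9376),
	}}
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}

The test then polls the endpoint from an exec pod; the timeout above indicates that reachability check, not the update, failed.
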
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":394,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:37.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:32:37.225: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 23 01:32:42.230: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 23 01:32:42.230: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 01:32:42.243: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4981 02a895a9-2855-4ebd-a9e7-b01b536a84ae 92039 1 2021-10-23 01:32:42 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-10-23 01:32:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00506a5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Oct 23 01:32:42.246: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-4981 8e93b8e5-6571-4fdc-97d9-a63c0f04fd3b 92041 1 2021-10-23 01:32:42 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 02a895a9-2855-4ebd-a9e7-b01b536a84ae 0xc00506a9f7 0xc00506a9f8}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:32:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a895a9-2855-4ebd-a9e7-b01b536a84ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00506aa88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:32:42.246: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Oct 23 01:32:42.246: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4981 339ee69f-091e-4ca2-854f-b26348145a79 92040 1 2021-10-23 01:32:37 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 02a895a9-2855-4ebd-a9e7-b01b536a84ae 0xc00506a8e7 0xc00506a8e8}] [] [{e2e.test Update apps/v1 
2021-10-23 01:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:32:42 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"02a895a9-2855-4ebd-a9e7-b01b536a84ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00506a988 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:32:42.249: INFO: Pod "test-cleanup-controller-kvq87" is available: &Pod{ObjectMeta:{test-cleanup-controller-kvq87 test-cleanup-controller- deployment-4981 a1c243bf-be8b-48a7-833b-8298e2d2ea30 92017 0 2021-10-23 01:32:37 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.235" ], "mac": "0a:ee:62:ff:19:1e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.235" ], "mac": "0a:ee:62:ff:19:1e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller 339ee69f-091e-4ca2-854f-b26348145a79 0xc00506aeb7 0xc00506aeb8}] [] [{kube-controller-manager Update v1 2021-10-23 01:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"339ee69f-091e-4ca2-854f-b26348145a79\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:32:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:32:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.235\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pxbdq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxbdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks
:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:32:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:32:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:32:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:32:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.235,StartTime:2021-10-23 01:32:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:32:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://40bc2e261d87d8a99991e14e3e0602621627d69bd4e6b856532529a079d206bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:42.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4981" for this suite. • [SLOW TEST:5.062 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":15,"skipped":413,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:11.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:32:11.371: INFO: created pod Oct 23 01:32:11.371: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-5622" to be "Succeeded or Failed" Oct 23 01:32:11.375: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.226499ms Oct 23 01:32:13.379: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007128543s Oct 23 01:32:15.384: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012233317s STEP: Saw pod success Oct 23 01:32:15.384: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Oct 23 01:32:45.384: INFO: polling logs Oct 23 01:32:45.392: INFO: Pod logs: 2021/10/23 01:32:14 OK: Got token 2021/10/23 01:32:14 validating with in-cluster discovery 2021/10/23 01:32:14 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/10/23 01:32:14 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5622:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634953331, NotBefore:1634952731, IssuedAt:1634952731, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5622", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"33e3d00c-d663-4edb-894d-06e5d77eb2df"}}} 2021/10/23 01:32:14 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/10/23 01:32:14 OK: Validated signature on JWT 2021/10/23 01:32:14 OK: Got valid claims from token! 2021/10/23 01:32:14 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5622:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634953331, NotBefore:1634952731, IssuedAt:1634952731, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5622", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"33e3d00c-d663-4edb-894d-06e5d77eb2df"}}} Oct 23 01:32:45.392: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:45.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5622" for this suite. 
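
The validator pod's logs show the standard OIDC flow: mint a token bound to audience oidc-discovery-test, fetch the issuer's discovery document, verify the JWT signature and claims. A sketch of just the token-minting step via the TokenRequest subresource, with the namespace and audience taken from the log and the clientset from the first sketch:

package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// mintBoundToken requests a service-account token bound to the audience the
// validator pod checks against.
func mintBoundToken(ctx context.Context, cs kubernetes.Interface) error {
	tr := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences: []string{"oidc-discovery-test"},
		},
	}
	tok, err := cs.CoreV1().ServiceAccounts("svcaccounts-5622").
		CreateToken(ctx, "default", tr, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// The issuer claim in this token should match the discovery document at
	// https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration,
	// which is exactly what the pod logs above confirm.
	fmt.Println(tok.Status.Token)
	return nil
}
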
• [SLOW TEST:34.071 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":11,"skipped":166,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:26.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8166 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-8166 Oct 23 01:32:26.675: INFO: Found 0 stateful pods, waiting for 1 Oct 23 01:32:36.685: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 01:32:36.706: INFO: Deleting all statefulset in ns statefulset-8166 Oct 23 01:32:36.708: INFO: Scaling statefulset ss to 0 Oct 23 01:32:46.722: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 01:32:46.725: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:46.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8166" for this suite. 
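
The "getting scale subresource" / "updating a scale subresource" steps above map onto two typed client calls. A sketch using the names from the log (StatefulSet ss in namespace statefulset-8166; the replica count is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet reads the /scale subresource, bumps replicas, and writes
// it back, the read-modify-write the spec then verifies.
func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface) error {
	sts := cs.AppsV1().StatefulSets("statefulset-8166")
	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 2
	_, err = sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{})
	return err
}
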
• [SLOW TEST:20.100 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":18,"skipped":417,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:46.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Oct 23 01:32:46.809: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 23 01:32:51.815: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:51.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1273" for this suite. 
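
The ReplicaSet variant additionally logs a "Patch a scale subresource" step. That can be expressed as a merge patch against the scale subresource of test-rs; the replica count here is illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchScale adjusts test-rs replicas through its scale subresource rather
// than a full object update, the patch path the spec above exercises.
func patchScale(ctx context.Context, cs kubernetes.Interface) error {
	payload := []byte(`{"spec":{"replicas":4}}`)
	_, err := cs.AppsV1().ReplicaSets("replicaset-1273").
		Patch(ctx, "test-rs", types.MergePatchType, payload,
			metav1.PatchOptions{}, "scale")
	return err
}
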
• [SLOW TEST:5.055 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":19,"skipped":439,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:45.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-08da0cd5-6694-4cc6-93c5-dcfffc93567c STEP: Creating a pod to test consume secrets Oct 23 01:32:45.473: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2" in namespace "projected-8186" to be "Succeeded or Failed" Oct 23 01:32:45.475: INFO: Pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.925345ms Oct 23 01:32:47.479: INFO: Pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006357341s Oct 23 01:32:49.484: INFO: Pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010634192s Oct 23 01:32:51.488: INFO: Pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014563651s Oct 23 01:32:53.492: INFO: Pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018678076s STEP: Saw pod success Oct 23 01:32:53.492: INFO: Pod "pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2" satisfied condition "Succeeded or Failed" Oct 23 01:32:53.494: INFO: Trying to get logs from node node2 pod pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2 container secret-volume-test: STEP: delete the pod Oct 23 01:32:53.580: INFO: Waiting for pod pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2 to disappear Oct 23 01:32:53.582: INFO: Pod pod-projected-secrets-b1267c00-1378-47f2-a0ef-5c02dd663da2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:53.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8186" for this suite. 
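
The pod under test mounts one projected secret at two paths and checks the content at both. A hedged sketch of the pod shape, with the secret name taken from the log; the image, args, and mount paths are illustrative, not the suite's own:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod builds a pod that consumes the same secret via two
// projected volumes; create it with
// cs.CoreV1().Pods(ns).Create(ctx, projectedSecretPod(), metav1.CreateOptions{}).
func projectedSecretPod() *v1.Pod {
	secret := "projected-secret-test-08da0cd5-6694-4cc6-93c5-dcfffc93567c"
	src := v1.VolumeSource{Projected: &v1.ProjectedVolumeSource{
		Sources: []v1.VolumeProjection{{
			Secret: &v1.SecretProjection{
				LocalObjectReference: v1.LocalObjectReference{Name: secret},
			},
		}},
	}}
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{
				{Name: "secret-vol-1", VolumeSource: src},
				{Name: "secret-vol-2", VolumeSource: src},
			},
			Containers: []v1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox:1.28", // illustrative; the suite uses its own test image
				Args:  []string{"sleep", "60"},
				VolumeMounts: []v1.VolumeMount{
					{Name: "secret-vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
					{Name: "secret-vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
				},
			}},
		},
	}
}
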
• [SLOW TEST:8.158 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":178,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:48.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-9fcc7120-de9d-4e22-90aa-0bf4bfff039d in namespace container-probe-3342 Oct 23 01:28:54.052: INFO: Started pod busybox-9fcc7120-de9d-4e22-90aa-0bf4bfff039d in namespace container-probe-3342 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 01:28:54.054: INFO: Initial restart count of pod busybox-9fcc7120-de9d-4e22-90aa-0bf4bfff039d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:32:54.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3342" for this suite. 
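------------------------------
The four-minute runtime above comes from watching restartCount stay at 0 while an exec probe keeps succeeding. A sketch of a container with that probe shape, under the assumption that the command keeps /tmp/health present for the pod's lifetime; image, command and timings are illustrative, not read from the run:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "busybox",
        Image:   "busybox:1.34",
        Command: []string{"/bin/sh", "-c", "echo ok >/tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
            // The embedded field is named ProbeHandler in newer k8s.io/api releases.
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       5,
            FailureThreshold:    1, // `cat` keeps succeeding, so no restart
        },
    }
    fmt.Printf("%+v\n", c)
}
------------------------------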
• [SLOW TEST:246.577 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":124,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:51.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:32:55.896: INFO: Deleting pod "var-expansion-bf543fc8-cf60-4d72-8c64-7caacf82282a" in namespace "var-expansion-8996" Oct 23 01:32:55.902: INFO: Wait up to 5m0s for pod "var-expansion-bf543fc8-cf60-4d72-8c64-7caacf82282a" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:05.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8996" for this suite. 
• [SLOW TEST:14.068 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":20,"skipped":446,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:05.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 23 01:33:05.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 create -f -' Oct 23 01:33:06.344: INFO: stderr: "" Oct 23 01:33:06.344: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 23 01:33:06.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:33:06.516: INFO: stderr: "" Oct 23 01:33:06.516: INFO: stdout: "update-demo-nautilus-bwttm update-demo-nautilus-n9wwb " Oct 23 01:33:06.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods update-demo-nautilus-bwttm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:33:06.675: INFO: stderr: "" Oct 23 01:33:06.675: INFO: stdout: "" Oct 23 01:33:06.675: INFO: update-demo-nautilus-bwttm is created but not running Oct 23 01:33:11.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 01:33:11.838: INFO: stderr: "" Oct 23 01:33:11.838: INFO: stdout: "update-demo-nautilus-bwttm update-demo-nautilus-n9wwb " Oct 23 01:33:11.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods update-demo-nautilus-bwttm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:33:11.990: INFO: stderr: "" Oct 23 01:33:11.990: INFO: stdout: "true" Oct 23 01:33:11.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods update-demo-nautilus-bwttm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:33:12.165: INFO: stderr: "" Oct 23 01:33:12.165: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:33:12.165: INFO: validating pod update-demo-nautilus-bwttm Oct 23 01:33:12.169: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:33:12.169: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:33:12.169: INFO: update-demo-nautilus-bwttm is verified up and running Oct 23 01:33:12.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods update-demo-nautilus-n9wwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 01:33:12.318: INFO: stderr: "" Oct 23 01:33:12.318: INFO: stdout: "true" Oct 23 01:33:12.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods update-demo-nautilus-n9wwb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 01:33:12.478: INFO: stderr: "" Oct 23 01:33:12.478: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 01:33:12.478: INFO: validating pod update-demo-nautilus-n9wwb Oct 23 01:33:12.482: INFO: got data: { "image": "nautilus.jpg" } Oct 23 01:33:12.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 01:33:12.482: INFO: update-demo-nautilus-n9wwb is verified up and running STEP: using delete to clean up resources Oct 23 01:33:12.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 delete --grace-period=0 --force -f -' Oct 23 01:33:12.602: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:33:12.602: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 23 01:33:12.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get rc,svc -l name=update-demo --no-headers' Oct 23 01:33:12.790: INFO: stderr: "No resources found in kubectl-6049 namespace.\n" Oct 23 01:33:12.790: INFO: stdout: "" Oct 23 01:33:12.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6049 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 23 01:33:12.948: INFO: stderr: "" Oct 23 01:33:12.948: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:12.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6049" for this suite. 
• [SLOW TEST:7.018 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":21,"skipped":454,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:54.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:32:54.649: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:32:56.654: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:32:58.652: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:00.652: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:02.655: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:04.653: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:06.654: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:08.653: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:10.655: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:12.654: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:14.653: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = false) Oct 23 01:33:16.655: INFO: The status of Pod test-webserver-d65d144c-87a7-4e45-9c7e-b7bea2d1ccf3 is Running (Ready = true) Oct 23 01:33:16.658: INFO: Container started at 2021-10-23 01:32:57 +0000 UTC, pod became ready at 2021-10-23 01:33:14 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:16.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4608" for this suite. 
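------------------------------
The Running-but-not-Ready stretch in the log above is exactly what a readiness probe with an initial delay produces: unlike a liveness probe it never restarts the container, it only gates the Ready condition. A sketch of that container shape; image, port and the specific delay are illustrative assumptions:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    c := corev1.Container{
        Name:  "test-webserver",
        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
        Args:  []string{"test-webserver"},
        ReadinessProbe: &corev1.Probe{
            Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
                HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
            },
            InitialDelaySeconds: 20, // first probe held back; pod stays Ready=false until then
            PeriodSeconds:       5,
        },
    }
    fmt.Printf("%+v\n", c)
}
------------------------------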
• [SLOW TEST:22.054 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:16.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1327" for this suite. 
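------------------------------
The discovery walk above (GET /apis, then the group document, then the group/version resource list) maps directly onto client-go's discovery client. A minimal sketch of the same three lookups:

package main

import (
    "fmt"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // GET /apis: find the apiextensions.k8s.io group and its preferred version.
    groups, err := dc.ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        if g.Name == "apiextensions.k8s.io" {
            fmt.Println("preferred:", g.PreferredVersion.GroupVersion)
        }
    }

    // GET /apis/apiextensions.k8s.io/v1: the customresourcedefinitions resource lives here.
    rl, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
    if err != nil {
        panic(err)
    }
    for _, r := range rl.APIResources {
        fmt.Println(r.Name)
    }
}
------------------------------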
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":8,"skipped":159,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:53.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 23 01:32:53.682: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:33:01.673: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:19.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9944" for this suite. • [SLOW TEST:25.611 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":13,"skipped":215,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:11.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1023 01:32:21.203412 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:33:23.219: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:23.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3837" for this suite.
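------------------------------
"Not orphaning" in the garbage-collector spec above means the delete request carries a cascading propagation policy, so the GC removes every pod holding an ownerReference to the controller. A minimal sketch of such a delete; the "my-rc" name and "default" namespace are illustrative assumptions:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Background cascading delete: the RC goes away immediately and the
    // garbage collector reaps its pods afterwards. DeletePropagationOrphan
    // would instead leave the pods behind.
    policy := metav1.DeletePropagationBackground
    err = cs.CoreV1().ReplicationControllers("default").Delete(
        context.TODO(), "my-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
    if err != nil {
        panic(err)
    }
}
------------------------------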
• [SLOW TEST:72.085 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":13,"skipped":118,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:19.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:33:19.660: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:33:21.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549599, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549599, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549599, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549599, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:33:24.682: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:25.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4725" for this suite. STEP: Destroying namespace "webhook-4725-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.470 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":14,"skipped":218,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:29:24.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-cfffd791-4501-462a-9e71-ec386380442f in namespace container-probe-8807 Oct 23 01:29:34.669: INFO: Started pod liveness-cfffd791-4501-462a-9e71-ec386380442f in namespace container-probe-8807 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 01:29:34.671: INFO: Initial restart count of pod liveness-cfffd791-4501-462a-9e71-ec386380442f is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:35.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8807" for this suite. 
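------------------------------
The tcp:8080 variant above differs from the exec probe only in the handler: the kubelet just dials the port, and as long as something listens there the restart count stays at 0 for the whole four-minute watch. A sketch of that container shape (image, args and timings are illustrative assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    c := corev1.Container{
        Name:  "liveness",
        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
        Args:  []string{"netexec", "--http-port=8080"},
        LivenessProbe: &corev1.Probe{
            Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
                TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       5,
        },
    }
    fmt.Printf("%+v\n", c)
}
------------------------------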
• [SLOW TEST:250.575 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":82,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:35.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:33:35.245: INFO: Creating deployment "test-recreate-deployment" Oct 23 01:33:35.248: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 23 01:33:35.253: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 23 01:33:37.259: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 23 01:33:37.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549615, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549615, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549615, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549615, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:33:39.267: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 23 01:33:39.275: INFO: Updating deployment test-recreate-deployment Oct 23 01:33:39.275: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 01:33:39.315: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1367 9a267010-d4c2-4d56-8519-95962dd57210 93017 2 2021-10-23 01:33:35 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-23 01:33:39 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:33:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ed8ef8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-23 01:33:39 +0000 UTC,LastTransitionTime:2021-10-23 01:33:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-23 01:33:39 +0000 UTC,LastTransitionTime:2021-10-23 01:33:35 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 23 01:33:39.317: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-1367 ee1caf47-fdfe-44ba-b314-7fc379f126f2 93016 1 2021-10-23 01:33:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9a267010-d4c2-4d56-8519-95962dd57210 0xc000ed93d0 0xc000ed93d1}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:33:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a267010-d4c2-4d56-8519-95962dd57210\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ed9448 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:33:39.317: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 23 01:33:39.318: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-1367 4c044d8c-5710-4615-8c52-8668d602e8af 93005 2 2021-10-23 01:33:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9a267010-d4c2-4d56-8519-95962dd57210 0xc000ed92d7 0xc000ed92d8}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:33:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a267010-d4c2-4d56-8519-95962dd57210\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ed9368 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:33:39.321: INFO: Pod "test-recreate-deployment-85d47dcb4-gh4j7" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-gh4j7 test-recreate-deployment-85d47dcb4- deployment-1367 47c6066f-8c2d-4952-ae6e-ee2134172182 93018 0 2021-10-23 01:33:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 ee1caf47-fdfe-44ba-b314-7fc379f126f2 0xc000ed987f 0xc000ed9890}] [] [{kube-controller-manager Update v1 2021-10-23 01:33:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee1caf47-fdfe-44ba-b314-7fc379f126f2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 01:33:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wvnkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvnkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:33:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:33:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:33:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:33:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 01:33:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:39.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1367" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":90,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:39.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 23 01:33:39.371: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 23 01:33:39.375: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 23 01:33:39.375: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 23 01:33:39.387: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 23 01:33:39.387: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 23 01:33:39.400: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 23 01:33:39.400: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 23 01:33:46.449: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:46.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-4418" for this suite. 
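------------------------------
The LimitRange verified above is what injects those request/limit defaults into pods that omit them. A sketch of an equivalent object built with the values the log actually checks (requests cpu=100m/memory=200Mi, limits cpu=500m/memory=500Mi; the ephemeral-storage pair is left out here for brevity, and the object name and namespace are illustrative):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    lr := &corev1.LimitRange{
        ObjectMeta: metav1.ObjectMeta{Name: "container-defaults"},
        Spec: corev1.LimitRangeSpec{
            Limits: []corev1.LimitRangeItem{{
                Type: corev1.LimitTypeContainer,
                DefaultRequest: corev1.ResourceList{ // filled in when a container sets no requests
                    corev1.ResourceCPU:    resource.MustParse("100m"),
                    corev1.ResourceMemory: resource.MustParse("200Mi"),
                },
                Default: corev1.ResourceList{ // filled in when a container sets no limits
                    corev1.ResourceCPU:    resource.MustParse("500m"),
                    corev1.ResourceMemory: resource.MustParse("500Mi"),
                },
            }},
        },
    }
    if _, err := cs.CoreV1().LimitRanges("default").Create(context.TODO(), lr, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------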
• [SLOW TEST:7.124 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":7,"skipped":97,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:25.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-2a713b64-3a90-4e50-ad5a-cdd21c0bb5d9 in namespace container-probe-2779 Oct 23 01:33:29.859: INFO: Started pod liveness-2a713b64-3a90-4e50-ad5a-cdd21c0bb5d9 in namespace container-probe-2779 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 01:33:29.862: INFO: Initial restart count of pod liveness-2a713b64-3a90-4e50-ad5a-cdd21c0bb5d9 is 0 Oct 23 01:33:47.895: INFO: Restart count of pod container-probe-2779/liveness-2a713b64-3a90-4e50-ad5a-cdd21c0bb5d9 is now 1 (18.033178684s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:47.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2779" for this suite. 
• [SLOW TEST:22.096 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":261,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:46.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:33:46.755: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 23 01:33:48.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:33:50.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549626, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:33:53.780: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that the API server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:53.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8735" for this suite. STEP: Destroying namespace "webhook-8735-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.359 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":8,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:47.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes Oct 23 01:33:47.953: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 23 01:33:52.956: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:33:53.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5037" for this suite.
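------------------------------
"Releasing" a pod in the spec above is just a label edit: once the pod no longer matches the RC's selector, the controller drops its ownerReference and creates a replacement. A sketch of that label patch; the pod name, namespace and new label value are illustrative assumptions:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Overwrite the selector label so the RC stops matching this pod.
    patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
    _, err = cs.CoreV1().Pods("default").Patch(
        context.TODO(), "pod-release-abc12", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
}
------------------------------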
• [SLOW TEST:6.053 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":16,"skipped":264,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:33:53.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6972.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6972.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 23 01:34:00.042: INFO: DNS probes using dns-6972/dns-test-8366eaf1-6d26-4e92-afb6-1890aec574b4 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:34:00.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6972" for this suite.
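The wheezy/jessie probe commands above loop dig for up to 600 iterations, writing an OK marker file per lookup once both the UDP and TCP queries return a non-empty answer. The same check can be run once by hand from any pod that has dig installed (the pod name dns-utils here is a placeholder):

    kubectl exec dns-utils -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A   # UDP path
    kubectl exec dns-utils -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A     # TCP path
    # Either command printing an A record is the condition the probe script records as "OK".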
• [SLOW TEST:6.080 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":17,"skipped":264,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:34:00.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-9fba0469-b537-41cb-a39e-e7f2495ff686
STEP: Creating a pod to test consume configMaps
Oct 23 01:34:00.157: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3" in namespace "configmap-3135" to be "Succeeded or Failed"
Oct 23 01:34:00.160: INFO: Pod "pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.367247ms
Oct 23 01:34:02.186: INFO: Pod "pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029629132s
Oct 23 01:34:04.193: INFO: Pod "pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036835376s
STEP: Saw pod success
Oct 23 01:34:04.194: INFO: Pod "pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3" satisfied condition "Succeeded or Failed"
Oct 23 01:34:04.196: INFO: Trying to get logs from node node2 pod pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3 container agnhost-container:
STEP: delete the pod
Oct 23 01:34:04.216: INFO: Waiting for pod pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3 to disappear
Oct 23 01:34:04.220: INFO: Pod pod-configmaps-a2ad4406-a9c7-4014-af3a-525936187ad3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:34:04.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3135" for this suite.
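What the ConfigMap test above exercises is the items mapping (a key projected to a custom path inside the volume) read by a container running as a non-root user. A minimal sketch with hypothetical names:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-mapping-demo
    spec:
      securityContext:
        runAsUser: 1000              # run as non-root, as the test does
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/etc/cm/path/to/data-1"]   # reads the remapped key
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
          items:
          - key: data-1
            path: path/to/data-1     # key remapped to a nested path
    EOF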
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":300,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:33:12.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Oct 23 01:33:17.019: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1461 PodName:var-expansion-40e1e046-3036-47d5-86b7-b43a97b10ff9 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 01:33:17.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: test for file in mounted path
Oct 23 01:33:17.168: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1461 PodName:var-expansion-40e1e046-3036-47d5-86b7-b43a97b10ff9 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 01:33:17.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: updating the annotation value
Oct 23 01:33:17.855: INFO: Successfully updated pod "var-expansion-40e1e046-3036-47d5-86b7-b43a97b10ff9"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Oct 23 01:33:17.858: INFO: Deleting pod "var-expansion-40e1e046-3036-47d5-86b7-b43a97b10ff9" in namespace "var-expansion-1461"
Oct 23 01:33:17.862: INFO: Wait up to 5m0s for pod "var-expansion-40e1e046-3036-47d5-86b7-b43a97b10ff9" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:34:05.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1461" for this suite.
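The Variable Expansion test above writes through a mount path that is expanded from pod metadata; the mechanism it relies on is subPathExpr, which substitutes an environment variable into the volume mount's subpath so each pod writes under its own directory. A minimal sketch (names are placeholders, not the test's actual manifest):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "touch /volume_mount/test.log"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(POD_NAME)   # expands per pod, e.g. to "subpath-demo"
      volumes:
      - name: workdir
        emptyDir: {}
    EOF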
• [SLOW TEST:52.901 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should succeed in writing subpaths in container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":22,"skipped":466,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:34:05.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 23 01:34:06.311: INFO: Checking APIGroup: apiregistration.k8s.io
Oct 23 01:34:06.312: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Oct 23 01:34:06.312: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.312: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Oct 23 01:34:06.312: INFO: Checking APIGroup: apps
Oct 23 01:34:06.313: INFO: PreferredVersion.GroupVersion: apps/v1
Oct 23 01:34:06.313: INFO: Versions found [{apps/v1 v1}]
Oct 23 01:34:06.313: INFO: apps/v1 matches apps/v1
Oct 23 01:34:06.313: INFO: Checking APIGroup: events.k8s.io
Oct 23 01:34:06.313: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Oct 23 01:34:06.313: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.313: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Oct 23 01:34:06.313: INFO: Checking APIGroup: authentication.k8s.io
Oct 23 01:34:06.315: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Oct 23 01:34:06.315: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.315: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Oct 23 01:34:06.315: INFO: Checking APIGroup: authorization.k8s.io
Oct 23 01:34:06.315: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Oct 23 01:34:06.315: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.315: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Oct 23 01:34:06.315: INFO: Checking APIGroup: autoscaling
Oct 23 01:34:06.316: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Oct 23 01:34:06.316: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Oct 23 01:34:06.316: INFO: autoscaling/v1 matches autoscaling/v1
Oct 23 01:34:06.316: INFO: Checking APIGroup: batch
Oct 23 01:34:06.317: INFO: PreferredVersion.GroupVersion: batch/v1
Oct 23 01:34:06.317: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Oct 23 01:34:06.317: INFO: batch/v1 matches batch/v1
Oct 23 01:34:06.317: INFO: Checking APIGroup: certificates.k8s.io
Oct 23 01:34:06.318: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Oct 23 01:34:06.318: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.318: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Oct 23 01:34:06.318: INFO: Checking APIGroup: networking.k8s.io
Oct 23 01:34:06.319: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Oct 23 01:34:06.319: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.319: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Oct 23 01:34:06.319: INFO: Checking APIGroup: extensions
Oct 23 01:34:06.319: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Oct 23 01:34:06.319: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Oct 23 01:34:06.319: INFO: extensions/v1beta1 matches extensions/v1beta1
Oct 23 01:34:06.319: INFO: Checking APIGroup: policy
Oct 23 01:34:06.320: INFO: PreferredVersion.GroupVersion: policy/v1
Oct 23 01:34:06.320: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}]
Oct 23 01:34:06.320: INFO: policy/v1 matches policy/v1
Oct 23 01:34:06.320: INFO: Checking APIGroup: rbac.authorization.k8s.io
Oct 23 01:34:06.321: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Oct 23 01:34:06.321: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.321: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Oct 23 01:34:06.321: INFO: Checking APIGroup: storage.k8s.io
Oct 23 01:34:06.322: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Oct 23 01:34:06.322: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.322: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Oct 23 01:34:06.322: INFO: Checking APIGroup: admissionregistration.k8s.io
Oct 23 01:34:06.323: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Oct 23 01:34:06.323: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.323: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Oct 23 01:34:06.323: INFO: Checking APIGroup: apiextensions.k8s.io
Oct 23 01:34:06.323: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Oct 23 01:34:06.323: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.323: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Oct 23 01:34:06.323: INFO: Checking APIGroup: scheduling.k8s.io
Oct 23 01:34:06.324: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Oct 23 01:34:06.324: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.324: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Oct 23 01:34:06.324: INFO: Checking APIGroup: coordination.k8s.io
Oct 23 01:34:06.324: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Oct 23 01:34:06.324: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.324: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Oct 23 01:34:06.325: INFO: Checking APIGroup: node.k8s.io
Oct 23 01:34:06.325: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
Oct 23 01:34:06.325: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.325: INFO: node.k8s.io/v1 matches node.k8s.io/v1
Oct 23 01:34:06.325: INFO: Checking APIGroup: discovery.k8s.io
Oct 23 01:34:06.326: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1
Oct 23 01:34:06.326: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.326: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1
Oct 23 01:34:06.326: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
Oct 23 01:34:06.326: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1
Oct 23 01:34:06.326: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.326: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1
Oct 23 01:34:06.326: INFO: Checking APIGroup: intel.com
Oct 23 01:34:06.327: INFO: PreferredVersion.GroupVersion: intel.com/v1
Oct 23 01:34:06.327: INFO: Versions found [{intel.com/v1 v1}]
Oct 23 01:34:06.327: INFO: intel.com/v1 matches intel.com/v1
Oct 23 01:34:06.327: INFO: Checking APIGroup: k8s.cni.cncf.io
Oct 23 01:34:06.327: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1
Oct 23 01:34:06.327: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}]
Oct 23 01:34:06.327: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1
Oct 23 01:34:06.327: INFO: Checking APIGroup: monitoring.coreos.com
Oct 23 01:34:06.328: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1
Oct 23 01:34:06.328: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}]
Oct 23 01:34:06.328: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1
Oct 23 01:34:06.328: INFO: Checking APIGroup: telemetry.intel.com
Oct 23 01:34:06.328: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1
Oct 23 01:34:06.328: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}]
Oct 23 01:34:06.328: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1
Oct 23 01:34:06.328: INFO: Checking APIGroup: custom.metrics.k8s.io
Oct 23 01:34:06.329: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1
Oct 23 01:34:06.329: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}]
Oct 23 01:34:06.329: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:34:06.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1420" for this suite.
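Each "Checking APIGroup" block above compares a group's preferredVersion against the highest-priority entry in its versions list. The same discovery document the test reads is available directly from the apiserver; for example (output abridged, its shape following the APIGroup type):

    kubectl get --raw /apis/apps
    # {"kind":"APIGroup","apiVersion":"v1","name":"apps",
    #  "versions":[{"groupVersion":"apps/v1","version":"v1"}],
    #  "preferredVersion":{"groupVersion":"apps/v1","version":"v1"}}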
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":23,"skipped":469,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:31:27.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-8596
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Oct 23 01:31:27.228: INFO: Found 0 stateful pods, waiting for 3
Oct 23 01:31:37.235: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:31:37.235: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:31:37.235: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 23 01:31:47.233: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:31:47.233: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:31:47.233: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:31:47.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8596 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 01:31:47.471: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 01:31:47.471: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 01:31:47.471: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Oct 23 01:31:57.502: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Oct 23 01:32:07.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8596 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:32:07.954: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 01:32:07.954: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 01:32:07.954: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 01:32:17.970: INFO: Waiting for StatefulSet statefulset-8596/ss2 to complete update
Oct 23 01:32:17.970: INFO: Waiting for Pod statefulset-8596/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Oct 23 01:32:17.970: INFO: Waiting for Pod statefulset-8596/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Oct 23 01:32:17.970: INFO: Waiting for Pod statefulset-8596/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Oct 23 01:32:27.977: INFO: Waiting for StatefulSet statefulset-8596/ss2 to complete update
Oct 23 01:32:27.977: INFO: Waiting for Pod statefulset-8596/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Oct 23 01:32:27.977: INFO: Waiting for Pod statefulset-8596/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Oct 23 01:32:37.980: INFO: Waiting for StatefulSet statefulset-8596/ss2 to complete update
Oct 23 01:32:37.980: INFO: Waiting for Pod statefulset-8596/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
STEP: Rolling back to a previous revision
Oct 23 01:32:47.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8596 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 01:32:48.342: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 01:32:48.342: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 01:32:48.342: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Oct 23 01:32:58.373: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Oct 23 01:33:08.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8596 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:33:08.648: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 01:33:08.648: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 01:33:08.648: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 01:33:28.665: INFO: Waiting for StatefulSet statefulset-8596/ss2 to complete update
Oct 23 01:33:28.665: INFO: Waiting for Pod statefulset-8596/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct 23 01:33:38.672: INFO: Deleting all statefulset in ns statefulset-8596
Oct 23 01:33:38.674: INFO: Scaling statefulset ss2 to 0
Oct 23 01:34:08.688: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 01:34:08.691: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:34:08.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8596" for this suite.
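The rolling update and rollback above are tracked through controller revisions (ss2-5bbbc9fc94 and ss2-677d6db895 in this run), and pods are replaced in reverse ordinal order under the default RollingUpdate strategy. The same flow can be driven by hand (assuming the pod template's container is named webserver, as the test's pods are):

    kubectl -n statefulset-8596 set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
    kubectl -n statefulset-8596 rollout status statefulset/ss2    # waits while ss2-2, ss2-1, ss2-0 are updated
    kubectl -n statefulset-8596 rollout history statefulset/ss2   # lists the controller revisions
    kubectl -n statefulset-8596 rollout undo statefulset/ss2      # rolls back to the previous revision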
• [SLOW TEST:161.508 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:29:07.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-7548
[It] Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7548
STEP: Creating statefulset with conflicting port in namespace statefulset-7548
STEP: Waiting until pod test-pod will start running in namespace statefulset-7548
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7548
Oct 23 01:34:11.371: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a80780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a80780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001a80780, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct 23 01:34:11.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7548 describe po test-pod'
Oct 23 01:34:11.550: INFO: stderr: ""
Oct 23 01:34:11.550: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-7548\nPriority:     0\nNode:         node1/10.10.190.207\nStart Time:   Sat, 23 Oct 2021 01:29:07 +0000\nLabels:       <none>\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.186\"\n                    ],\n                    \"mac\": \"ba:bb:d0:2c:de:51\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.186\"\n                    ],\n                    \"mac\": \"ba:bb:d0:2c:de:51\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: privileged\nStatus:       Running\nIP:           10.244.3.186\nIPs:\n  IP:  10.244.3.186\nContainers:\n  webserver:\n    Container ID:   docker://0fc26b0ee7bc22346f659d497e08cfc2eeef07001cea09c2ae80f208fef8a685\n    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Sat, 23 Oct 2021 01:29:09 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkr9m (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-pkr9m:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason   Age   From     Message\n  ----    ------   ----  ----     -------\n  Normal  Pulling  5m2s  kubelet  Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n  Normal  Pulled   5m2s  kubelet  Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 291.238975ms\n  Normal  Created  5m2s  kubelet  Created container webserver\n  Normal  Started  5m2s  kubelet  Started container webserver\n"
Oct 23 01:34:11.550: INFO: Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-7548
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Sat, 23 Oct 2021 01:29:07 +0000
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.186"
                    ],
                    "mac": "ba:bb:d0:2c:de:51",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.186"
                    ],
                    "mac": "ba:bb:d0:2c:de:51",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: privileged
Status:       Running
IP:           10.244.3.186
IPs:
  IP:  10.244.3.186
Containers:
  webserver:
    Container ID:   docker://0fc26b0ee7bc22346f659d497e08cfc2eeef07001cea09c2ae80f208fef8a685
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Sat, 23 Oct 2021 01:29:09 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkr9m (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-pkr9m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  5m2s  kubelet  Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
  Normal  Pulled   5m2s  kubelet  Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 291.238975ms
  Normal  Created  5m2s  kubelet  Created container webserver
  Normal  Started  5m2s  kubelet  Started container webserver
Oct 23 01:34:11.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7548 logs test-pod --tail=100'
Oct 23 01:34:11.733: INFO: stderr: ""
Oct 23 01:34:11.733: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message\n[Sat Oct 23 01:29:09.820935 2021] [mpm_event:notice] [pid 1:tid 139672167480168] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Oct 23 01:29:09.820969 2021] [core:notice] [pid 1:tid 139672167480168] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Oct 23 01:34:11.733: INFO: Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message
[Sat Oct 23 01:29:09.820935 2021] [mpm_event:notice] [pid 1:tid 139672167480168] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Sat Oct 23 01:29:09.820969 2021] [core:notice] [pid 1:tid 139672167480168] AH00094: Command line: 'httpd -D FOREGROUND'
Oct 23 01:34:11.733: INFO: Deleting all statefulset in ns statefulset-7548
Oct 23 01:34:11.735: INFO: Scaling statefulset ss to 0
Oct 23 01:34:11.744: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 01:34:11.746: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "statefulset-7548".
STEP: Found 7 events.
Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:07 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]]
Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:07 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []]
Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:07 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used.
Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]] Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:09 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:09 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 291.238975ms Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:09 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver Oct 23 01:34:11.758: INFO: At 2021-10-23 01:29:09 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver Oct 23 01:34:11.760: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:34:11.760: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:29:07 +0000 UTC }] Oct 23 01:34:11.760: INFO: Oct 23 01:34:11.764: INFO: Logging node info for node master1 Oct 23 01:34:11.766: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 93614 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:08 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:08 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:08 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:34:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:34:11.767: INFO: Logging kubelet events for node master1 Oct 23 01:34:11.769: INFO: Logging pods the kubelet 
thinks is on node master1
Oct 23 01:34:11.795: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container coredns ready: true, restart count 2
Oct 23 01:34:11.795: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container docker-registry ready: true, restart count 0
Oct 23 01:34:11.795: INFO: Container nginx ready: true, restart count 0
Oct 23 01:34:11.795: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:34:11.795: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:34:11.795: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:34:11.795: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 01:34:11.795: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:34:11.795: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:34:11.795: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:34:11.795: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:34:11.795: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:11.795: INFO: Container kube-scheduler ready: true, restart count 0
W1023 01:34:11.809959 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
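Everything from the FAIL onward is the framework's standard post-mortem dump: namespace events, then node info and kubelet pod lists for each node (the dump continues below with master2 and master3). The actionable signal in this run is in the FailedCreate events above: a PodSecurityPolicy rejects hostPort 21017, so ss-0 is never admitted and therefore can never be observed being re-created. The same evidence can be pulled by hand:

    kubectl -n statefulset-7548 get events --field-selector involvedObject.name=ss   # shows the PSP hostPort rejections
    kubectl describe node master1                                                    # node-level view mirrored in the dump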
Oct 23 01:34:11.877: INFO: Latency metrics for node master1 Oct 23 01:34:11.877: INFO: Logging node info for node master2 Oct 23 01:34:11.879: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 93476 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:02 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:02 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:02 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:34:02 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:34:11.880: INFO: Logging kubelet events for node master2 Oct 23 01:34:11.883: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 01:34:11.896: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Container autoscaler ready: true, restart count 1 Oct 23 01:34:11.896: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:34:11.896: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:34:11.896: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:34:11.896: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:34:11.896: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:34:11.896: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:34:11.896: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:34:11.896: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:34:11.896: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:34:11.896: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:34:11.896: INFO: Container kube-controller-manager ready: true, restart count 2 W1023 01:34:11.912719 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 23 01:34:11.987: INFO: Latency metrics for node master2 Oct 23 01:34:11.987: INFO: Logging node info for node master3 Oct 23 01:34:11.989: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 93585 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:07 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:07 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:07 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:34:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:34:11.989: INFO: Logging kubelet events for node master3
Oct 23 01:34:11.992: INFO: Logging pods the kubelet thinks is on node master3
Oct 23 01:34:12.006: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:34:12.006: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 01:34:12.006: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:34:12.006: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:34:12.006: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:34:12.006: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 01:34:12.006: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:34:12.006: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:34:12.006: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 01:34:12.006: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:34:12.006: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.006: INFO: Container coredns ready: true, restart count 2
W1023 01:34:12.021397 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 01:34:12.094: INFO: Latency metrics for node master3
Oct 23 01:34:12.094: INFO: Logging node info for node node1
Oct 23 01:34:12.097: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 93487 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:34:12.098: INFO: Logging kubelet events for node node1
Oct 23 01:34:12.100: INFO: Logging pods the kubelet thinks is on node node1
Oct 23 01:34:12.117: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container collectd ready: true, restart count 0
Oct 23 01:34:12.117: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:34:12.117: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:34:12.117: INFO: affinity-clusterip-timeout-2s4ln started at 2021-10-23 01:32:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container affinity-clusterip-timeout ready: true, restart count 0
Oct 23 01:34:12.117: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 01:34:12.117: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container config-reloader ready: true, restart count 0
Oct 23 01:34:12.117: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 01:34:12.117: INFO: Container grafana ready: true, restart count 0
Oct 23 01:34:12.117: INFO: Container prometheus ready: true, restart count 1
Oct 23 01:34:12.117: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:34:12.117: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 01:34:12.117: INFO: test-pod started at 2021-10-23 01:29:07 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container webserver ready: true, restart count 0
Oct 23 01:34:12.117: INFO: affinity-clusterip-transition-xfj5l started at 2021-10-23 01:33:53 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container affinity-clusterip-transition ready: true, restart count 0
Oct 23 01:34:12.117: INFO: var-expansion-9cb674eb-d5dc-4f84-9ce2-f7a33e33d32a started at 2021-10-23 01:31:57 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container dapi-container ready: true, restart count 0
Oct 23 01:34:12.117: INFO: affinity-nodeport-transition-srdpr started at 2021-10-23 01:33:16 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.117: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Oct 23 01:34:12.117: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:34:12.118: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:34:12.118: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:34:12.118: INFO: sample-crd-conversion-webhook-deployment-697cdbd8f4-k84tl started at 2021-10-23 01:34:06 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0
Oct 23 01:34:12.118: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container discover ready: false, restart count 0
Oct 23 01:34:12.118: INFO: Container init ready: false, restart count 0
Oct 23 01:34:12.118: INFO: Container install ready: false, restart count 0
Oct 23 01:34:12.118: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container nodereport ready: true, restart count 0
Oct 23 01:34:12.118: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:34:12.118: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 01:34:12.118: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:34:12.118: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:34:12.118: INFO: affinity-clusterip-timeout-vtrdv started at 2021-10-23 01:32:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container affinity-clusterip-timeout ready: true, restart count 0
Oct 23 01:34:12.118: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Init container install-cni ready: true, restart count 2
Oct 23 01:34:12.118: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 01:34:12.118: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 01:34:12.118: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:34:12.118: INFO: affinity-clusterip-transition-x558j started at 2021-10-23 01:33:53 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.118: INFO: Container affinity-clusterip-transition ready: true, restart count 0
W1023 01:34:12.132904 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
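The "(i+r container statuses recorded)" counts in these dumps are just the lengths of a pod's init-container and container status slices, and each "ready ..., restart count ..." line is one entry from them. A small sketch of the same reporting (printStatuses is a hypothetical helper, not the framework's code; the sample values are kube-flannel-2cdvd's from the dump above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// printStatuses mirrors the per-pod lines above: the "(i+r container statuses
// recorded)" counts are the lengths of the two status slices.
func printStatuses(pod *corev1.Pod) {
	fmt.Printf("%s (%d+%d container statuses recorded)\n", pod.Name,
		len(pod.Status.InitContainerStatuses), len(pod.Status.ContainerStatuses))
	for _, s := range pod.Status.InitContainerStatuses {
		fmt.Printf("Init container %s ready: %v, restart count %d\n", s.Name, s.Ready, s.RestartCount)
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("Container %s ready: %v, restart count %d\n", s.Name, s.Ready, s.RestartCount)
	}
}

func main() {
	// Sample values copied from kube-flannel-2cdvd in the dump above.
	pod := &corev1.Pod{}
	pod.Name = "kube-flannel-2cdvd"
	pod.Status.InitContainerStatuses = []corev1.ContainerStatus{{Name: "install-cni", Ready: true, RestartCount: 2}}
	pod.Status.ContainerStatuses = []corev1.ContainerStatus{{Name: "kube-flannel", Ready: true, RestartCount: 3}}
	printStatuses(pod)
}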
Oct 23 01:34:12.323: INFO: Latency metrics for node node1 Oct 23 01:34:12.323: INFO: Logging node info for node node2 Oct 23 01:34:12.326: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 93483 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:34:03 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:34:12.326: INFO: Logging kubelet events for node node2
Oct 23 01:34:12.328: INFO: Logging pods the kubelet thinks is on node node2
Oct 23 01:34:12.344: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:12.344: INFO: Container nodereport ready: true, restart count 1
Oct 23 01:34:12.344: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:34:12.345: INFO: execpod-affinityq59m4 started at 2021-10-23 01:33:25 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container agnhost-container ready: true, restart count 0
Oct 23 01:34:12.345: INFO: execpod-affinitydx6vj started at 2021-10-23 01:32:52 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container agnhost-container ready: true, restart count 0
Oct 23 01:34:12.345: INFO: test-webserver-2f59f410-c7c2-4ba2-a4d8-0726690d56e6 started at 2021-10-23 01:31:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container test-webserver ready: true, restart count 0
Oct 23 01:34:12.345: INFO: affinity-nodeport-transition-9swz2 started at 2021-10-23 01:33:16 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Oct 23 01:34:12.345: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:34:12.345: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container discover ready: false, restart count 0
Oct 23 01:34:12.345: INFO: Container init ready: false, restart count 0
Oct 23 01:34:12.345: INFO: Container install ready: false, restart count 0
Oct 23 01:34:12.345: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 01:34:12.345: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:34:12.345: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:34:12.345: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:34:12.345: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:34:12.345: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:34:12.345: INFO: affinity-nodeport-transition-vrh2v started at 2021-10-23 01:33:16 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Oct 23 01:34:12.345: INFO: affinity-clusterip-transition-p756q started at 2021-10-23 01:33:53 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container affinity-clusterip-transition ready: false, restart count 0
Oct 23 01:34:12.345: INFO: downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46 started at 2021-10-23 01:34:08 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container client-container ready: false, restart count 0
Oct 23 01:34:12.345: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:34:12.345: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 01:34:12.345: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container tas-extender ready: true, restart count 0
Oct 23 01:34:12.345: INFO: affinity-clusterip-timeout-9fwkc started at 2021-10-23 01:32:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container affinity-clusterip-timeout ready: true, restart count 0
Oct 23 01:34:12.345: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container collectd ready: true, restart count 0
Oct 23 01:34:12.345: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:34:12.345: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:34:12.345: INFO: sample-webhook-deployment-78988fc6cd-t6hgd started at 2021-10-23 01:34:04 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container sample-webhook ready: true, restart count 0
Oct 23 01:34:12.345: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:34:12.345: INFO: Container nginx-proxy ready: true, restart count 2
W1023 01:34:12.359133 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
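The Capacity and Allocatable entries in the node dumps above print resource.Quantity's internal fields: {{unscaled-value scale} {unused-inf.Dec} cached-string format}. So {{21474836480 0} {} 20Gi BinarySI} is 20Gi, and node2's {{1 3} {} 1k DecimalSI} for example.com/fakecpu is 1 x 10^3. A short sketch of that round-trip with k8s.io/apimachinery:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// {{21474836480 0} {} 20Gi BinarySI} from the hugepages-2Mi capacity above.
	hp := resource.MustParse("20Gi")
	fmt.Println(hp.Value(), hp.String()) // 21474836480 20Gi

	// {{1 3} {} 1k DecimalSI} from the example.com/fakecpu capacity above:
	// unscaled value 1 at decimal scale 3, i.e. 1 * 10^3.
	fake := resource.NewScaledQuantity(1, 3)
	fmt.Println(fake.Value(), fake.String()) // 1000 1k
}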
Oct 23 01:34:12.579: INFO: Latency metrics for node node2 Oct 23 01:34:12.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7548" for this suite. • Failure [305.275 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:11.371: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":4,"skipped":80,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:08.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:34:08.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46" in namespace "downward-api-5511" to be "Succeeded or Failed" Oct 23 01:34:08.745: INFO: Pod "downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39194ms Oct 23 01:34:10.750: INFO: Pod "downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007581309s Oct 23 01:34:12.754: INFO: Pod "downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011654143s STEP: Saw pod success Oct 23 01:34:12.754: INFO: Pod "downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46" satisfied condition "Succeeded or Failed" Oct 23 01:34:12.757: INFO: Trying to get logs from node node2 pod downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46 container client-container: STEP: delete the pod Oct 23 01:34:12.770: INFO: Waiting for pod downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46 to disappear Oct 23 01:34:12.772: INFO: Pod downwardapi-volume-69a5ef19-99ed-4aa7-9092-a3e1b918aa46 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:12.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5511" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":96,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:12.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:12.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5704" for this suite. 
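The service-listing check that just finished ("should find a service from listing all namespaces") boils down to a single List call with an empty namespace. A minimal client-go sketch of the same lookup, assuming the suite's kubeconfig path; the printing is illustrative, not part of the test:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // An empty namespace (metav1.NamespaceAll) returns services from
        // every namespace, which is exactly what the test exercises.
        svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, s := range svcs.Items {
            fmt.Printf("%s/%s\n", s.Namespace, s.Name)
        }
    }
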
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":12,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:12.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:12.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-339 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:16.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-5124" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:16.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-339" for this suite. 
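The PDB flow above (create, list cluster-wide, delete as a collection) can be reproduced outside the framework. A minimal sketch, with a hypothetical namespace "demo" and label "pdb-set=true" standing in for the test's generated names:

    package main

    import (
        "context"
        "fmt"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ns := "demo" // illustrative namespace
        min := intstr.FromInt(1)
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb", Labels: map[string]string{"pdb-set": "true"}},
            Spec: policyv1.PodDisruptionBudgetSpec{
                MinAvailable: &min,
                Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
            },
        }
        if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // List across every namespace, as the test does.
        all, err := cs.PolicyV1().PodDisruptionBudgets(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("PDBs cluster-wide:", len(all.Items))
        // One call deletes the whole labelled collection.
        err = cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(context.TODO(),
            metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "pdb-set=true"})
        if err != nil {
            panic(err)
        }
    }
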
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":5,"skipped":86,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:06.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 23 01:34:06.848: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 23 01:34:08.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549646, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549646, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549646, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549646, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:34:11.870: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:11.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:19.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4592" for this suite. 
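The conversion webhook deployed above is, at its core, an HTTPS endpoint that answers ConversionReview requests. The test uses a prebuilt image for this; the sketch below only illustrates the shape of the protocol such a server implements. The version-specific field mapping is elided (it depends entirely on the CRD schema), and the cert paths are placeholders:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime"
    )

    // convert answers one ConversionReview: every object in the request must
    // come back in ConvertedObjects at the desired apiVersion.
    func convert(w http.ResponseWriter, r *http.Request) {
        var review apiextv1.ConversionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
            http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
            return
        }
        resp := &apiextv1.ConversionResponse{
            UID:    review.Request.UID,
            Result: metav1.Status{Status: metav1.StatusSuccess},
        }
        for _, obj := range review.Request.Objects {
            var u unstructured.Unstructured
            if err := u.UnmarshalJSON(obj.Raw); err != nil {
                resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
                break
            }
            // ...rewrite renamed or moved fields for the desired version here...
            u.SetAPIVersion(review.Request.DesiredAPIVersion)
            raw, _ := u.MarshalJSON()
            resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Raw: raw})
        }
        review.Response = resp
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(review)
    }

    func main() {
        http.HandleFunc("/convert", convert)
        // The API server only talks to conversion webhooks over TLS.
        log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
    }
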
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.631 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":24,"skipped":476,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:04.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:34:04.927: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 23 01:34:06.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:34:09.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the 
webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:20.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7619" for this suite. STEP: Destroying namespace "webhook-7619-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.804 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":19,"skipped":327,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:23.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 23 01:33:23.287: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 92773 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:33:23.287: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 92773 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 23 01:33:33.294: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 92924 0
2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:33:33.294: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 92924 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 23 01:33:43.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 93082 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:33:43.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 93082 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 23 01:33:53.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 93272 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:33:53.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5694 5d0ece56-9f5f-47ef-a222-983e67e4bf66 93272 0 2021-10-23 01:33:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 01:33:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Oct 23 01:34:03.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5694 3ea2d8a4-783b-449a-8481-a1328c6cdc5e 93481 0 2021-10-23 01:34:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 01:34:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:34:03.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5694 
3ea2d8a4-783b-449a-8481-a1328c6cdc5e 93481 0 2021-10-23 01:34:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 01:34:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 23 01:34:13.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5694 3ea2d8a4-783b-449a-8481-a1328c6cdc5e 93735 0 2021-10-23 01:34:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 01:34:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:34:13.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5694 3ea2d8a4-783b-449a-8481-a1328c6cdc5e 93735 0 2021-10-23 01:34:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 01:34:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:23.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5694" for this suite. • [SLOW TEST:60.069 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":14,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:53.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-971 STEP: creating service affinity-clusterip-transition in namespace services-971 STEP: creating replication controller affinity-clusterip-transition in namespace services-971 I1023 01:33:53.926306 30 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-971, replica count: 3 I1023 01:33:56.977278 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 
01:33:59.978275 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:33:59.983: INFO: Creating new exec pod Oct 23 01:34:07.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-971 exec execpod-affinity7mbrv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Oct 23 01:34:07.270: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-transition 80\n+ echo hostName\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Oct 23 01:34:07.270: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:34:07.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-971 exec execpod-affinity7mbrv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.14.146 80' Oct 23 01:34:07.501: INFO: stderr: "+ nc -v -t -w 2 10.233.14.146 80\n+ echo hostName\nConnection to 10.233.14.146 80 port [tcp/http] succeeded!\n" Oct 23 01:34:07.501: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:34:07.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-971 exec execpod-affinity7mbrv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.14.146:80/ ; done' Oct 23 01:34:07.812: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n" Oct 23 01:34:07.812: INFO: stdout: "\naffinity-clusterip-transition-xfj5l\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-xfj5l\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-xfj5l\naffinity-clusterip-transition-x558j\naffinity-clusterip-transition-xfj5l\naffinity-clusterip-transition-x558j\naffinity-clusterip-transition-xfj5l\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-x558j\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-x558j" Oct 23 01:34:07.812: INFO: Received response from host: affinity-clusterip-transition-xfj5l Oct 23 01:34:07.812: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:07.812: INFO: Received response from host: 
affinity-clusterip-transition-p756q Oct 23 01:34:07.812: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:07.812: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:07.812: INFO: Received response from host: affinity-clusterip-transition-xfj5l Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-xfj5l Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-x558j Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-xfj5l Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-x558j Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-xfj5l Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-x558j Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:07.813: INFO: Received response from host: affinity-clusterip-transition-x558j Oct 23 01:34:07.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-971 exec execpod-affinity7mbrv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.14.146:80/ ; done' Oct 23 01:34:08.126: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.14.146:80/\n" Oct 23 01:34:08.126: INFO: stdout: "\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q\naffinity-clusterip-transition-p756q" Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received 
response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Received response from host: affinity-clusterip-transition-p756q Oct 23 01:34:08.126: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-971, will wait for the garbage collector to delete the pods Oct 23 01:34:08.190: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.190215ms Oct 23 01:34:08.290: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.880268ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:24.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-971" for this suite. 
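The transition observed above (responses spread across three endpoints, then pinned to p756q) is driven by flipping Spec.SessionAffinity on a live Service; kube-proxy reprograms its rules in response. A minimal sketch of that switch, assuming a hypothetical "demo" namespace and selector:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ns := "demo" // illustrative namespace

        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-demo"},
            Spec: corev1.ServiceSpec{
                Selector:        map[string]string{"app": "affinity-demo"},
                Ports:           []corev1.ServicePort{{Port: 80}},
                SessionAffinity: corev1.ServiceAffinityClientIP, // pin each client to one endpoint
            },
        }
        created, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        // "Switching" affinity is just an update on the same Service object.
        created.Spec.SessionAffinity = corev1.ServiceAffinityNone
        if _, err := cs.CoreV1().Services(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
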
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:30.612 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":130,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:20.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 01:34:20.121: INFO: Waiting up to 5m0s for pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1" in namespace "security-context-8545" to be "Succeeded or Failed" Oct 23 01:34:20.123: INFO: Pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210478ms Oct 23 01:34:22.127: INFO: Pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005777288s Oct 23 01:34:24.130: INFO: Pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008951578s Oct 23 01:34:26.135: INFO: Pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0147121s Oct 23 01:34:28.139: INFO: Pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018691695s STEP: Saw pod success Oct 23 01:34:28.140: INFO: Pod "security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1" satisfied condition "Succeeded or Failed" Oct 23 01:34:28.142: INFO: Trying to get logs from node node2 pod security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1 container test-container: STEP: delete the pod Oct 23 01:34:28.159: INFO: Waiting for pod security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1 to disappear Oct 23 01:34:28.161: INFO: Pod security-context-89572c67-5c3d-48ff-a5db-d24f204b7ea1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:28.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8545" for this suite. 
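The pod above gets its identity from the pod-level security context, which applies to every container unless a container overrides it. A minimal sketch of an equivalent pod, with illustrative UID/GID values and namespace (the test's actual values may differ):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        uid, gid := int64(1001), int64(2002)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
            Spec: corev1.PodSpec{
                // Pod-level RunAsUser/RunAsGroup apply to all containers.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, RunAsGroup: &gid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox:1.28",
                    Command: []string{"sh", "-c", "id"}, // should report uid=1001 gid=2002
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
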
• [SLOW TEST:8.119 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":516,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:20.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Oct 23 01:34:20.140: INFO: Waiting up to 5m0s for pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51" in namespace "var-expansion-191" to be "Succeeded or Failed" Oct 23 01:34:20.143: INFO: Pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454572ms Oct 23 01:34:22.148: INFO: Pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505136s Oct 23 01:34:24.152: INFO: Pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0113353s Oct 23 01:34:26.156: INFO: Pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015379263s Oct 23 01:34:28.158: INFO: Pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017739283s STEP: Saw pod success Oct 23 01:34:28.158: INFO: Pod "var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51" satisfied condition "Succeeded or Failed" Oct 23 01:34:28.160: INFO: Trying to get logs from node node2 pod var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51 container dapi-container: STEP: delete the pod Oct 23 01:34:28.176: INFO: Waiting for pod var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51 to disappear Oct 23 01:34:28.178: INFO: Pod var-expansion-2ac04d78-e41c-4fb3-98e0-e944e6b90b51 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:28.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-191" for this suite. 
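The substitution exercised above is performed by the kubelet, not by a shell: $(VAR) references in a container's command and args are expanded from that container's declared environment variables. A minimal sketch with an illustrative variable name and namespace:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "dapi-container",
                    Image: "busybox:1.28",
                    Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                    // $(TEST_VAR) is expanded by the kubelet before exec;
                    // no shell is involved, so /bin/echo sees the final string.
                    Command: []string{"/bin/echo", "-n", "value: $(TEST_VAR)"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
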
• [SLOW TEST:8.076 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":342,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:23.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:23.462: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ed958547-f68d-4add-9454-74b33f4ccaa7", Controller:(*bool)(0xc0058d225a), BlockOwnerDeletion:(*bool)(0xc0058d225b)}} Oct 23 01:34:23.466: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3d971a3c-7f09-4040-96a8-3df65bda82c8", Controller:(*bool)(0xc0035ed83a), BlockOwnerDeletion:(*bool)(0xc0035ed83b)}} Oct 23 01:34:23.473: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"99db5203-aad5-475f-96e7-f42e38447168", Controller:(*bool)(0xc00041efaa), BlockOwnerDeletion:(*bool)(0xc00041efab)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:28.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4156" for this suite. 
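The owner references dumped above form a deliberate circle: pod1 is owned by pod3, pod2 by pod1, pod3 by pod2, and the test asserts the garbage collector does not deadlock on it. A sketch of building the same shape, assuming a hypothetical "demo" namespace; the owner references are patched in after creation because they require the owners' UIDs:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods := cs.CoreV1().Pods("demo")

        var created []*corev1.Pod
        for i := 1; i <= 3; i++ {
            p, err := pods.Create(context.TODO(), &corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("pod%d", i)},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name: "c", Image: "k8s.gcr.io/pause:3.4.1",
                }}},
            }, metav1.CreateOptions{})
            if err != nil {
                panic(err)
            }
            created = append(created, p)
        }
        for i, p := range created {
            owner := created[(i+2)%3] // pod1<-pod3, pod2<-pod1, pod3<-pod2, as in the log
            patch := fmt.Sprintf(`{"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"Pod","name":%q,"uid":%q,"controller":true,"blockOwnerDeletion":true}]}}`,
                owner.Name, owner.UID)
            if _, err := pods.Patch(context.TODO(), p.Name, types.StrategicMergePatchType,
                []byte(patch), metav1.PatchOptions{}); err != nil {
                panic(err)
            }
        }
        // Deleting any one pod should still let the GC make progress.
    }
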
• [SLOW TEST:5.087 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":15,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:28.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Oct 23 01:34:28.596: INFO: created test-podtemplate-1 Oct 23 01:34:28.599: INFO: created test-podtemplate-2 Oct 23 01:34:28.603: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Oct 23 01:34:28.605: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 23 01:34:28.614: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:28.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3307" for this suite. 
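The collection delete above is a single API call, not three. A minimal sketch of the same create-then-DeleteCollection sequence, assuming a hypothetical "demo" namespace and the label the test uses on its templates:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        tmpl := cs.CoreV1().PodTemplates("demo")

        for i := 1; i <= 3; i++ {
            _, err := tmpl.Create(context.TODO(), &corev1.PodTemplate{
                ObjectMeta: metav1.ObjectMeta{
                    Name:   fmt.Sprintf("test-podtemplate-%d", i),
                    Labels: map[string]string{"podtemplate-set": "true"},
                },
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name: "nginx", Image: "nginx:1.14",
                    }}},
                },
            }, metav1.CreateOptions{})
            if err != nil {
                panic(err)
            }
        }
        // One call removes everything matching the selector.
        err = tmpl.DeleteCollection(context.TODO(), metav1.DeleteOptions{},
            metav1.ListOptions{LabelSelector: "podtemplate-set=true"})
        if err != nil {
            panic(err)
        }
    }
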
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":16,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:24.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Oct 23 01:34:24.566: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:34:26.570: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:34:28.569: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:34:30.570: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Oct 23 01:34:31.587: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:32.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7137" for this suite. 
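The "release" step above happens entirely through labels: once a pod's label no longer matches the ReplicaSet selector, the controller drops its ownerReference from the pod and starts a replacement. A minimal sketch of triggering that, reusing the test's pod name in a hypothetical "demo" namespace:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Change the matched label so the ReplicaSet selector no longer
        // applies; the controller then releases (orphans) this pod.
        patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
        _, err = cs.CoreV1().Pods("demo").Patch(context.TODO(), "pod-adoption-release",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }
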
• [SLOW TEST:8.082 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":10,"skipped":140,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:12.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 01:34:12.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-687 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 23 01:34:13.051: INFO: stderr: "" Oct 23 01:34:13.051: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 23 01:34:18.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-687 get pod e2e-test-httpd-pod -o json' Oct 23 01:34:18.264: INFO: stderr: "" Oct 23 01:34:18.264: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.88\\\"\\n ],\\n \\\"mac\\\": \\\"b6:97:91:15:c4:45\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.88\\\"\\n ],\\n \\\"mac\\\": \\\"b6:97:91:15:c4:45\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2021-10-23T01:34:13Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-687\",\n \"resourceVersion\": \"93829\",\n \"uid\": \"5a66bba9-b36e-4300-9cbb-aa079abd48a0\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-8wgjh\",\n \"readOnly\": 
true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-8wgjh\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T01:34:13Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T01:34:15Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T01:34:15Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T01:34:13Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://e76d3b48a397cede08d35cc802a79cc07f805771728b9155d338fdfa17fb5c58\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-23T01:34:15Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.88\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.88\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-23T01:34:13Z\"\n }\n}\n" STEP: replace the image in the pod Oct 23 01:34:18.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-687 replace -f -' Oct 23 01:34:18.648: INFO: stderr: "" Oct 23 01:34:18.648: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Oct 23 01:34:18.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-687 delete pods e2e-test-httpd-pod' Oct 23 01:34:34.246: INFO: stderr: "" Oct 23 01:34:34.246: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:34.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-687" for this suite. • [SLOW TEST:21.371 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":13,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:32:42.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1438 Oct 23 01:32:42.302: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:32:44.306: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:32:46.307: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 23 01:32:46.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 23 01:32:46.613: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Oct 23 01:32:46.613: INFO: stdout: "iptables" Oct 23 01:32:46.613: INFO: proxyMode: iptables Oct 23 01:32:46.621: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 23 01:32:46.623: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1438 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1438 I1023 01:32:46.633522 35 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1438, replica count: 3 I1023 01:32:49.684938 35 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:32:52.685653 35 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:32:52.691: INFO: Creating new exec pod Oct 23 01:32:57.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Oct 23 01:32:58.072: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-timeout 80\n+ echo hostName\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Oct 23 01:32:58.072: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:32:58.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.15.71 80' Oct 23 01:32:58.361: INFO: stderr: "+ nc -v -t -w 2 10.233.15.71 80\n+ echo hostName\nConnection to 10.233.15.71 80 port [tcp/http] succeeded!\n" Oct 23 01:32:58.361: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:32:58.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.15.71:80/ ; done' Oct 23 01:32:58.656: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n" Oct 23 01:32:58.656: INFO: stdout: "\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln\naffinity-clusterip-timeout-2s4ln" Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: 
Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Received response from host: affinity-clusterip-timeout-2s4ln Oct 23 01:32:58.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.15.71:80/' Oct 23 01:32:58.901: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n" Oct 23 01:32:58.901: INFO: stdout: "affinity-clusterip-timeout-2s4ln" Oct 23 01:33:18.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.15.71:80/' Oct 23 01:33:20.307: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n" Oct 23 01:33:20.307: INFO: stdout: "affinity-clusterip-timeout-2s4ln" Oct 23 01:33:40.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.15.71:80/' Oct 23 01:33:40.589: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n" Oct 23 01:33:40.589: INFO: stdout: "affinity-clusterip-timeout-2s4ln" Oct 23 01:34:00.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.15.71:80/' Oct 23 01:34:00.880: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n" Oct 23 01:34:00.880: INFO: stdout: "affinity-clusterip-timeout-2s4ln" Oct 23 01:34:20.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1438 exec execpod-affinitydx6vj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.15.71:80/' Oct 23 01:34:21.203: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.15.71:80/\n" Oct 23 01:34:21.203: INFO: stdout: "affinity-clusterip-timeout-vtrdv" Oct 23 01:34:21.203: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1438, will wait for the garbage collector to delete the pods Oct 23 01:34:21.269: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.846174ms Oct 23 01:34:21.369: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.260077ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:34.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1438" for this suite. 
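[Editor's aside] The affinity behavior probed above (one backend, affinity-clusterip-timeout-2s4ln, answering every request until the idle window lapses, after which affinity-clusterip-timeout-vtrdv takes over) comes from the Service's session affinity settings. A minimal client-go sketch of that Service shape; the name, selector, and 10-second timeout here are illustrative, not taken from the test:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createAffinityService creates a ClusterIP Service that pins each client IP
// to one backend until timeoutSeconds of inactivity has passed.
func createAffinityService(client kubernetes.Interface, namespace string) (*corev1.Service, error) {
	timeout := int32(10) // illustrative; the test configures its own short timeout
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip-timeout"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
			// ClientIP affinity makes kube-proxy route repeated requests
			// from the same source to the same endpoint.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	return client.CoreV1().Services(namespace).Create(context.TODO(), svc, metav1.CreateOptions{})
}
```

That matches what the log shows: all sixteen back-to-back curls land on one pod, and only once the idle interval exceeds the configured timeout does a different backend answer.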
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:112.320 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":417,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:28.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 23 01:34:28.762: INFO: Waiting up to 5m0s for pod "pod-243b305a-49fd-4599-a830-57eac937f555" in namespace "emptydir-85" to be "Succeeded or Failed" Oct 23 01:34:28.764: INFO: Pod "pod-243b305a-49fd-4599-a830-57eac937f555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133577ms Oct 23 01:34:30.769: INFO: Pod "pod-243b305a-49fd-4599-a830-57eac937f555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006801396s Oct 23 01:34:32.774: INFO: Pod "pod-243b305a-49fd-4599-a830-57eac937f555": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011933868s Oct 23 01:34:34.778: INFO: Pod "pod-243b305a-49fd-4599-a830-57eac937f555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015843191s STEP: Saw pod success Oct 23 01:34:34.778: INFO: Pod "pod-243b305a-49fd-4599-a830-57eac937f555" satisfied condition "Succeeded or Failed" Oct 23 01:34:34.780: INFO: Trying to get logs from node node2 pod pod-243b305a-49fd-4599-a830-57eac937f555 container test-container: STEP: delete the pod Oct 23 01:34:34.795: INFO: Waiting for pod pod-243b305a-49fd-4599-a830-57eac937f555 to disappear Oct 23 01:34:34.797: INFO: Pod pod-243b305a-49fd-4599-a830-57eac937f555 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:34.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-85" for this suite. 
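[Editor's aside] The (root,0777,tmpfs) variant above boils down to a pod with a memory-backed emptyDir whose mount-point permissions the test container then verifies. A sketch of that pod shape, using an illustrative busybox stat command in place of the suite's mounttest image:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod returns a pod with a memory-backed (tmpfs) emptyDir.
// The real test asserts the 0777 mode via its mounttest container; stat-ing
// the mount point with busybox is an illustrative stand-in.
func tmpfsEmptyDirPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory asks the kubelet for a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}
```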
• [SLOW TEST:6.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":284,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:28.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 23 01:34:32.224: INFO: &Pod{ObjectMeta:{send-events-36a0fd4d-3b76-44fe-a9fa-e6de8b0768b7 events-4333 b9d66d92-2ea3-447d-bee5-2fe6565f4c5b 94260 0 2021-10-23 01:34:28 +0000 UTC map[name:foo time:198778469] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.92" ], "mac": "f2:8b:86:12:9b:eb", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.92" ], "mac": "f2:8b:86:12:9b:eb", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-23 01:34:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:34:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:34:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pqxp7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pqxp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:34:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:34:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:34:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:34:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.92,StartTime:2021-10-23 01:34:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:34:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://571df8cd637f134bb83d03f2aa1858aa36037b18f10a244ae882691563f7ea14,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 23 01:34:34.228: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 23 01:34:36.232: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:36.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4333" for this suite. 
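[Editor's aside] The "Saw scheduler event" / "Saw kubelet event" assertions amount to listing the events whose involvedObject is the pod and checking which component reported them. A hedged sketch of the same query with client-go (the function name and arguments are placeholders, not the suite's helpers):

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podEventSources lists the events attached to a pod and reports which
// components (e.g. "default-scheduler", "kubelet") emitted them.
func podEventSources(client kubernetes.Interface, namespace, podName string) error {
	events, err := client.CoreV1().Events(namespace).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=" + podName,
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s from %s: %s\n", e.Reason, e.Source.Component, e.Message)
	}
	return nil
}
```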
• [SLOW TEST:8.085 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:57.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Oct 23 01:33:57.738: INFO: Successfully updated pod "var-expansion-9cb674eb-d5dc-4f84-9ce2-f7a33e33d32a" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 23 01:33:59.746: INFO: Deleting pod "var-expansion-9cb674eb-d5dc-4f84-9ce2-f7a33e33d32a" in namespace "var-expansion-8379" Oct 23 01:33:59.750: INFO: Wait up to 5m0s for pod "var-expansion-9cb674eb-d5dc-4f84-9ce2-f7a33e33d32a" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:37.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8379" for this suite. • [SLOW TEST:160.581 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":16,"skipped":280,"failed":0} SSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:34.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:34.361: INFO: Creating pod... Oct 23 01:34:34.376: INFO: Pod Quantity: 1 Status: Pending Oct 23 01:34:35.380: INFO: Pod Quantity: 1 Status: Pending Oct 23 01:34:36.379: INFO: Pod Quantity: 1 Status: Pending Oct 23 01:34:37.379: INFO: Pod Quantity: 1 Status: Pending Oct 23 01:34:38.379: INFO: Pod Quantity: 1 Status: Pending Oct 23 01:34:39.380: INFO: Pod Status: Running Oct 23 01:34:39.380: INFO: Creating service... 
Oct 23 01:34:39.387: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/DELETE Oct 23 01:34:39.390: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 23 01:34:39.390: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/GET Oct 23 01:34:39.393: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 23 01:34:39.393: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/HEAD Oct 23 01:34:39.395: INFO: http.Client request:HEAD | StatusCode:200 Oct 23 01:34:39.395: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/OPTIONS Oct 23 01:34:39.397: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 23 01:34:39.397: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/PATCH Oct 23 01:34:39.400: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 23 01:34:39.400: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/POST Oct 23 01:34:39.402: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 23 01:34:39.402: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/pods/agnhost/proxy/some/path/with/PUT Oct 23 01:34:39.405: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Oct 23 01:34:39.405: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/DELETE Oct 23 01:34:39.407: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 23 01:34:39.407: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/GET Oct 23 01:34:39.410: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 23 01:34:39.410: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/HEAD Oct 23 01:34:39.413: INFO: http.Client request:HEAD | StatusCode:200 Oct 23 01:34:39.413: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/OPTIONS Oct 23 01:34:39.416: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 23 01:34:39.416: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/PATCH Oct 23 01:34:39.419: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 23 01:34:39.419: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/POST Oct 23 01:34:39.422: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 23 01:34:39.422: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1889/services/test-service/proxy/some/path/with/PUT Oct 23 01:34:39.425: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1889" for this suite. • [SLOW TEST:5.094 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":14,"skipped":173,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:28.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Oct 23 01:34:30.269: INFO: running pods: 0 < 3 Oct 23 01:34:32.289: INFO: running pods: 0 < 3 Oct 23 01:34:34.272: INFO: running pods: 1 < 3 Oct 23 01:34:36.275: INFO: running pods: 1 < 3 Oct 23 01:34:38.273: INFO: running pods: 2 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:40.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-992" for this suite. 
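[Editor's aside] "Waiting for the pdb to be processed" and the "running pods: N < 3" countdown reflect the disruption controller filling in the PodDisruptionBudget's .status as matching pods come up. A minimal sketch of creating such a PDB and reading that status back; the name, selector, and minAvailable value are illustrative:

```go
package example

import (
	"context"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createAndInspectPDB creates a PodDisruptionBudget requiring two matching
// pods to stay up, then reads the controller-populated status fields.
func createAndInspectPDB(client kubernetes.Interface, namespace string) error {
	minAvailable := intstr.FromInt(2)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
			MinAvailable: &minAvailable,
		},
	}
	created, err := client.PolicyV1().PodDisruptionBudgets(namespace).Create(context.TODO(), pdb, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// The controller fills in status asynchronously; in practice you poll,
	// as the test does, until the numbers reflect the running pods.
	fresh, err := client.PolicyV1().PodDisruptionBudgets(namespace).Get(context.TODO(), created.Name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("healthy %d/%d, disruptions allowed: %d\n",
		fresh.Status.CurrentHealthy, fresh.Status.DesiredHealthy, fresh.Status.DisruptionsAllowed)
	return nil
}
```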
• [SLOW TEST:12.081 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":21,"skipped":351,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:39.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 23 01:34:39.480: INFO: Waiting up to 5m0s for pod "pod-4fc1098b-4a30-438d-98b3-d18443482350" in namespace "emptydir-3002" to be "Succeeded or Failed" Oct 23 01:34:39.482: INFO: Pod "pod-4fc1098b-4a30-438d-98b3-d18443482350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380362ms Oct 23 01:34:41.486: INFO: Pod "pod-4fc1098b-4a30-438d-98b3-d18443482350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005925729s Oct 23 01:34:43.489: INFO: Pod "pod-4fc1098b-4a30-438d-98b3-d18443482350": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009101993s STEP: Saw pod success Oct 23 01:34:43.489: INFO: Pod "pod-4fc1098b-4a30-438d-98b3-d18443482350" satisfied condition "Succeeded or Failed" Oct 23 01:34:43.492: INFO: Trying to get logs from node node2 pod pod-4fc1098b-4a30-438d-98b3-d18443482350 container test-container: STEP: delete the pod Oct 23 01:34:43.526: INFO: Waiting for pod pod-4fc1098b-4a30-438d-98b3-d18443482350 to disappear Oct 23 01:34:43.528: INFO: Pod pod-4fc1098b-4a30-438d-98b3-d18443482350 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:43.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3002" for this suite. 
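[Editor's aside] The repeated `Phase="Pending" … Elapsed:` lines throughout this output are the framework polling a pod until it meets the "Succeeded or Failed" condition. The same wait can be written directly; a sketch assuming the 2-second poll cadence visible in the timestamps above:

```go
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed polls the pod's phase every two seconds until
// it terminates or the timeout expires, logging each observation like the
// framework does.
func waitForPodSucceededOrFailed(client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}
```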
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:43.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-40c52e69-6e70-439e-a62b-60ae57ac5d7d STEP: Creating a pod to test consume secrets Oct 23 01:34:43.635: INFO: Waiting up to 5m0s for pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310" in namespace "secrets-5776" to be "Succeeded or Failed" Oct 23 01:34:43.639: INFO: Pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310": Phase="Pending", Reason="", readiness=false. Elapsed: 3.778593ms Oct 23 01:34:45.643: INFO: Pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008173978s Oct 23 01:34:47.648: INFO: Pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012548494s Oct 23 01:34:49.652: INFO: Pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016284614s Oct 23 01:34:51.657: INFO: Pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022174023s STEP: Saw pod success Oct 23 01:34:51.658: INFO: Pod "pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310" satisfied condition "Succeeded or Failed" Oct 23 01:34:51.660: INFO: Trying to get logs from node node2 pod pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310 container secret-volume-test: STEP: delete the pod Oct 23 01:34:51.674: INFO: Waiting for pod pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310 to disappear Oct 23 01:34:51.676: INFO: Pod pod-secrets-2a189c83-cea6-480f-9caa-15ef86f4f310 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:51.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5776" for this suite. 
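[Editor's aside] "Mappings and Item Mode set" refers to projecting individual secret keys onto chosen paths with an explicit per-file mode. A sketch of the volume source involved; the key, path, and 0400 mode are illustrative rather than the test's values:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// secretVolumeWithItemMode maps one key of a secret to a custom path inside
// the mount, with an explicit per-item file mode (0400 here).
func secretVolumeWithItemMode(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Items: []corev1.KeyToPath{{
					Key:  "data-1",          // key inside the Secret
					Path: "new-path-data-1", // file name under the mount point
					Mode: &mode,             // overrides defaultMode for this item
				}},
			},
		},
	}
}
```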
• [SLOW TEST:8.087 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":212,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:37.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:53.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4837" for this suite. 
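[Editor's aside] The Kubelet spec above schedules a busybox command that always fails and asserts the resulting terminated state carries a reason. A sketch of reading that state back, assuming a single-container pod:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminatedReason returns the termination reason (e.g. "Error") and exit
// code of a pod's first container, or an error if it has not terminated.
func terminatedReason(client kubernetes.Interface, namespace, name string) (string, int32, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return "", 0, err
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		return "", 0, fmt.Errorf("pod %q has no container statuses yet", name)
	}
	term := pod.Status.ContainerStatuses[0].State.Terminated
	if term == nil {
		return "", 0, fmt.Errorf("container in pod %q has not terminated", name)
	}
	return term.Reason, term.ExitCode, nil
}
```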
• [SLOW TEST:16.061 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:34.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 01:34:34.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3475 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Oct 23 01:34:34.778: INFO: stderr: "" Oct 23 01:34:34.778: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Oct 23 01:34:34.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3475 delete pods e2e-test-httpd-pod' Oct 23 01:34:54.201: INFO: stderr: "" Oct 23 01:34:54.201: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:54.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3475" for this suite. 
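[Editor's aside] `kubectl run … --restart=Never`, as invoked above, produces a bare pod rather than a workload controller. The equivalent object sketched in client-go; the image matches the log, everything else is illustrative:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// httpdPodRestartNever mirrors what `kubectl run --restart=Never` creates:
// a single standalone pod that is not restarted after its container exits.
func httpdPodRestartNever() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
			}},
		},
	}
}
```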
• [SLOW TEST:19.605 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":17,"skipped":424,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":26,"skipped":517,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:36.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:42.319: INFO: Deleting pod "var-expansion-a5b05296-5742-4bb8-a863-eab7eb72c209" in namespace "var-expansion-5393" Oct 23 01:34:42.323: INFO: Wait up to 5m0s for pod "var-expansion-a5b05296-5742-4bb8-a863-eab7eb72c209" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:54.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5393" for this suite. 
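[Editor's aside] The variable-expansion spec above exercises the rule that a volume subPathExpr must expand to a relative path under the mount; an absolute result keeps the pod from ever running, which is the failure it waits for. A sketch of the accepted form, with illustrative volume and mount names:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// relativeSubPathExprMount shows the valid shape: SubPathExpr expands
// $(POD_NAME) to a relative directory under the volume. A leading "/",
// as used by the spec above, makes the pod fail instead of run.
func relativeSubPathExprMount() ([]corev1.EnvVar, []corev1.VolumeMount) {
	env := []corev1.EnvVar{{
		Name: "POD_NAME",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		},
	}}
	mounts := []corev1.VolumeMount{{
		Name:        "workdir",
		MountPath:   "/logs",
		SubPathExpr: "$(POD_NAME)", // "/$(POD_NAME)" would be rejected
	}}
	return env, mounts
}
```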
• [SLOW TEST:18.075 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":27,"skipped":517,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:40.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Oct 23 01:34:40.387: INFO: namespace kubectl-2282 Oct 23 01:34:40.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2282 create -f -' Oct 23 01:34:40.789: INFO: stderr: "" Oct 23 01:34:40.789: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 23 01:34:41.793: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:41.793: INFO: Found 0 / 1 Oct 23 01:34:42.792: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:42.792: INFO: Found 0 / 1 Oct 23 01:34:43.793: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:43.793: INFO: Found 0 / 1 Oct 23 01:34:44.792: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:44.792: INFO: Found 0 / 1 Oct 23 01:34:45.792: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:45.792: INFO: Found 0 / 1 Oct 23 01:34:46.793: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:46.793: INFO: Found 0 / 1 Oct 23 01:34:47.792: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:47.792: INFO: Found 0 / 1 Oct 23 01:34:48.792: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:48.792: INFO: Found 0 / 1 Oct 23 01:34:49.792: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:49.792: INFO: Found 0 / 1 Oct 23 01:34:50.795: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:50.795: INFO: Found 1 / 1 Oct 23 01:34:50.795: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 23 01:34:50.797: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:34:50.797: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 23 01:34:50.798: INFO: wait on agnhost-primary startup in kubectl-2282 Oct 23 01:34:50.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2282 logs agnhost-primary-bd8x7 agnhost-primary' Oct 23 01:34:50.969: INFO: stderr: "" Oct 23 01:34:50.969: INFO: stdout: "Paused\n" STEP: exposing RC Oct 23 01:34:50.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2282 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Oct 23 01:34:51.186: INFO: stderr: "" Oct 23 01:34:51.186: INFO: stdout: "service/rm2 exposed\n" Oct 23 01:34:51.188: INFO: Service rm2 in namespace kubectl-2282 found. STEP: exposing service Oct 23 01:34:53.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2282 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Oct 23 01:34:53.386: INFO: stderr: "" Oct 23 01:34:53.386: INFO: stdout: "service/rm3 exposed\n" Oct 23 01:34:53.388: INFO: Service rm3 in namespace kubectl-2282 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:55.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2282" for this suite. • [SLOW TEST:15.039 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":22,"skipped":388,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:51.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:34:51.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b" in namespace "projected-5178" to be "Succeeded or Failed" Oct 23 01:34:51.762: INFO: Pod "downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234746ms Oct 23 01:34:53.767: INFO: Pod "downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007648536s Oct 23 01:34:55.771: INFO: Pod "downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011519249s STEP: Saw pod success Oct 23 01:34:55.771: INFO: Pod "downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b" satisfied condition "Succeeded or Failed" Oct 23 01:34:55.773: INFO: Trying to get logs from node node2 pod downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b container client-container: STEP: delete the pod Oct 23 01:34:55.838: INFO: Waiting for pod downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b to disappear Oct 23 01:34:55.840: INFO: Pod downwardapi-volume-3cd3bc3f-b334-4050-837b-f4fd7949445b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:55.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5178" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":231,"failed":0} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":287,"failed":0} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:53.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:58.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3075" for this suite. 
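[Editor's aside] The Watchers spec starts several watches from recorded resourceVersions and verifies they all replay events in the same order. A hedged sketch of opening one such watch with client-go; configmaps and the namespace are placeholders for whatever resource is being observed:

```go
package example

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFromResourceVersion replays configmap events starting at rv; two
// watches opened at the same rv must deliver identical event sequences.
func watchFromResourceVersion(client kubernetes.Interface, namespace, rv string) error {
	w, err := client.CoreV1().ConfigMaps(namespace).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: rv,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		obj, err := meta.Accessor(ev.Object)
		if err != nil {
			return err
		}
		fmt.Printf("%s %s rv=%s\n", ev.Type, obj.GetName(), obj.GetResourceVersion())
	}
	return nil
}
```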
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":18,"skipped":287,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:54.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Oct 23 01:34:54.275: INFO: Waiting up to 5m0s for pod "pod-538911cf-97ec-4779-8dfb-a60fad370827" in namespace "emptydir-7455" to be "Succeeded or Failed" Oct 23 01:34:54.278: INFO: Pod "pod-538911cf-97ec-4779-8dfb-a60fad370827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917249ms Oct 23 01:34:56.282: INFO: Pod "pod-538911cf-97ec-4779-8dfb-a60fad370827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006388279s Oct 23 01:34:58.286: INFO: Pod "pod-538911cf-97ec-4779-8dfb-a60fad370827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010097136s STEP: Saw pod success Oct 23 01:34:58.286: INFO: Pod "pod-538911cf-97ec-4779-8dfb-a60fad370827" satisfied condition "Succeeded or Failed" Oct 23 01:34:58.288: INFO: Trying to get logs from node node1 pod pod-538911cf-97ec-4779-8dfb-a60fad370827 container test-container: STEP: delete the pod Oct 23 01:34:58.320: INFO: Waiting for pod pod-538911cf-97ec-4779-8dfb-a60fad370827 to disappear Oct 23 01:34:58.322: INFO: Pod pod-538911cf-97ec-4779-8dfb-a60fad370827 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:58.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7455" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":439,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:58.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 01:34:58.381: INFO: starting watch STEP: patching STEP: updating Oct 23 01:34:58.388: INFO: waiting for watch events with expected annotations Oct 23 01:34:58.388: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:34:58.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-9551" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":19,"skipped":443,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:55.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-cfaab40e-131e-44ca-af4f-425d7f9b06f0 STEP: Creating a pod to test consume secrets Oct 23 01:34:55.918: INFO: Waiting up to 5m0s for pod "pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8" in namespace "secrets-9560" to be "Succeeded or Failed" Oct 23 01:34:55.922: INFO: Pod "pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208353ms Oct 23 01:34:57.925: INFO: Pod "pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006590868s Oct 23 01:34:59.929: INFO: Pod "pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010767099s STEP: Saw pod success Oct 23 01:34:59.929: INFO: Pod "pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8" satisfied condition "Succeeded or Failed" Oct 23 01:34:59.932: INFO: Trying to get logs from node node1 pod pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8 container secret-volume-test: STEP: delete the pod Oct 23 01:34:59.997: INFO: Waiting for pod pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8 to disappear Oct 23 01:34:59.999: INFO: Pod pod-secrets-e54e4af8-b336-4b82-9bdb-46a1734362b8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:00.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9560" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":246,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:58.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 01:34:58.349: INFO: Waiting up to 5m0s for pod "downward-api-98224445-9325-4148-97a2-1f94927cc115" in namespace "downward-api-8385" to be "Succeeded or Failed" Oct 23 01:34:58.352: INFO: Pod "downward-api-98224445-9325-4148-97a2-1f94927cc115": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555546ms Oct 23 01:35:00.356: INFO: Pod "downward-api-98224445-9325-4148-97a2-1f94927cc115": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006620867s Oct 23 01:35:02.384: INFO: Pod "downward-api-98224445-9325-4148-97a2-1f94927cc115": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034848302s Oct 23 01:35:04.388: INFO: Pod "downward-api-98224445-9325-4148-97a2-1f94927cc115": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039122642s STEP: Saw pod success Oct 23 01:35:04.388: INFO: Pod "downward-api-98224445-9325-4148-97a2-1f94927cc115" satisfied condition "Succeeded or Failed" Oct 23 01:35:04.391: INFO: Trying to get logs from node node1 pod downward-api-98224445-9325-4148-97a2-1f94927cc115 container dapi-container: STEP: delete the pod Oct 23 01:35:04.403: INFO: Waiting for pod downward-api-98224445-9325-4148-97a2-1f94927cc115 to disappear Oct 23 01:35:04.405: INFO: Pod downward-api-98224445-9325-4148-97a2-1f94927cc115 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:04.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8385" for this suite. 
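[Editor's aside] The Downward API case above injects the pod's own UID through an env var fieldRef, then checks the container's output. The container shape, sketched with an illustrative busybox image and variable name:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// dapiContainer exposes the pod's UID to the process as $POD_UID via the
// downward API; the test then looks for that value in the container log.
func dapiContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo $POD_UID"},
		Env: []corev1.EnvVar{{
			Name: "POD_UID",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
			},
		}},
	}
}
```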
• [SLOW TEST:6.098 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:55.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:55.487: INFO: The status of Pod server-envvars-5c26610d-8c34-4cf8-b69a-25443c6ff587 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:34:57.490: INFO: The status of Pod server-envvars-5c26610d-8c34-4cf8-b69a-25443c6ff587 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:34:59.491: INFO: The status of Pod server-envvars-5c26610d-8c34-4cf8-b69a-25443c6ff587 is Running (Ready = true) Oct 23 01:34:59.513: INFO: Waiting up to 5m0s for pod "client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41" in namespace "pods-8043" to be "Succeeded or Failed" Oct 23 01:34:59.517: INFO: Pod "client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231265ms Oct 23 01:35:01.520: INFO: Pod "client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006618788s Oct 23 01:35:03.523: INFO: Pod "client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009982662s Oct 23 01:35:05.528: INFO: Pod "client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015417557s STEP: Saw pod success Oct 23 01:35:05.528: INFO: Pod "client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41" satisfied condition "Succeeded or Failed" Oct 23 01:35:05.531: INFO: Trying to get logs from node node1 pod client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41 container env3cont: STEP: delete the pod Oct 23 01:35:05.544: INFO: Waiting for pod client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41 to disappear Oct 23 01:35:05.546: INFO: Pod client-envvars-b9dbb7fe-ac40-4412-8074-c5aacf7afa41 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:05.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8043" for this suite. 
• [SLOW TEST:10.105 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":411,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:05.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Oct 23 01:35:05.640: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9460 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:05.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9460" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":24,"skipped":447,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:54.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:34:54.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:34:56.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549694, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549694, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549694, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549694, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:34:59.714: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:59.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-516-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:07.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7723" for this suite. STEP: Destroying namespace "webhook-7723-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.482 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":28,"skipped":537,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:05.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:35:05.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e" in namespace "downward-api-7985" to be "Succeeded or Failed" Oct 23 01:35:05.784: INFO: Pod "downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263465ms Oct 23 01:35:07.788: INFO: Pod "downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005812741s Oct 23 01:35:09.791: INFO: Pod "downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009369401s STEP: Saw pod success Oct 23 01:35:09.791: INFO: Pod "downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e" satisfied condition "Succeeded or Failed" Oct 23 01:35:09.794: INFO: Trying to get logs from node node2 pod downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e container client-container: STEP: delete the pod Oct 23 01:35:09.806: INFO: Waiting for pod downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e to disappear Oct 23 01:35:09.808: INFO: Pod downwardapi-volume-95ec2e1a-30f3-4c99-ae2d-7ae9562d0a8e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:09.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7985" for this suite. 
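------------------------------
This spec reads requests.memory back out of a downwardAPI volume: a resourceFieldRef projects the container's own memory request into a file the container then cats. A minimal sketch of that volume wiring; names, image, and the 32Mi figure are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// resourceFieldRef needs the container name since
							// requests are per-container, not per-pod.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------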
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":449,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:00.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-cf1b0d3c-1276-459c-b851-d5ae8c800b10 STEP: Creating configMap with name cm-test-opt-upd-03b123a4-d703-4f11-8bae-ab05eb073b45 STEP: Creating the pod Oct 23 01:35:00.072: INFO: The status of Pod pod-projected-configmaps-499a3835-ab0f-457d-b87c-9014da348282 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:35:02.077: INFO: The status of Pod pod-projected-configmaps-499a3835-ab0f-457d-b87c-9014da348282 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:35:04.076: INFO: The status of Pod pod-projected-configmaps-499a3835-ab0f-457d-b87c-9014da348282 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:35:06.077: INFO: The status of Pod pod-projected-configmaps-499a3835-ab0f-457d-b87c-9014da348282 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-cf1b0d3c-1276-459c-b851-d5ae8c800b10 STEP: Updating configmap cm-test-opt-upd-03b123a4-d703-4f11-8bae-ab05eb073b45 STEP: Creating configMap with name cm-test-opt-create-b10c4947-8537-4713-bc61-542ba085b3d4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:10.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2524" for this suite. 
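------------------------------
The interesting part of this spec is the optional flag: both configMap sources in the projected volume are optional, so the test can delete cm-test-opt-del-* and create cm-test-opt-create-* while the pod keeps running, and the kubelet refreshes the projected files in place. A sketch of just the volume; the configMap names are shortened placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional, // a missing source is not a mount error
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------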
• [SLOW TEST:10.449 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":254,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:10.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:35:11.005: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 23 01:35:13.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549711, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549711, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549711, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549711, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:35:16.022: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:16.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8123" for this suite. STEP: Destroying namespace "webhook-8123-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.586 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":20,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:16.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-2520b96c-451d-49ae-b7e5-1f0d38d81937 STEP: Creating a pod to test consume configMaps Oct 23 01:35:16.149: INFO: Waiting up to 5m0s for pod "pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9" in namespace "configmap-902" to be "Succeeded or Failed" Oct 23 01:35:16.151: INFO: Pod "pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473166ms Oct 23 01:35:18.155: INFO: Pod "pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006378629s Oct 23 01:35:20.160: INFO: Pod "pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010530001s STEP: Saw pod success Oct 23 01:35:20.160: INFO: Pod "pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9" satisfied condition "Succeeded or Failed" Oct 23 01:35:20.162: INFO: Trying to get logs from node node2 pod pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9 container agnhost-container: STEP: delete the pod Oct 23 01:35:20.221: INFO: Waiting for pod pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9 to disappear Oct 23 01:35:20.223: INFO: Pod pod-configmaps-354d9361-3a0c-43b5-a2fb-5fcf51fa4cd9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:20.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-902" for this suite. 
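------------------------------
"With mappings" here means the volume does not mount every configMap key under its own name; an items list remaps one key to a chosen relative path. A sketch of that volume source; the key and path are illustrative, not the suite's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				// Only data-1 is mounted, and it appears at
				// <mountPath>/path/to/data-2 rather than <mountPath>/data-1.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------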
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":277,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:09.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:22.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4771" for this suite. • [SLOW TEST:13.109 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":26,"skipped":483,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:58.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-69xx STEP: Creating a pod to test atomic-volume-subpath Oct 23 01:34:58.496: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-69xx" in namespace "subpath-8789" to be "Succeeded or Failed" Oct 23 01:34:58.498: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Pending", Reason="", readiness=false. Elapsed: 1.872397ms Oct 23 01:35:00.501: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005002004s Oct 23 01:35:02.505: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008965385s Oct 23 01:35:04.507: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 6.011171429s Oct 23 01:35:06.511: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 8.015000878s Oct 23 01:35:08.514: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 10.018218625s Oct 23 01:35:10.520: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 12.023897174s Oct 23 01:35:12.524: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 14.028000627s Oct 23 01:35:14.528: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 16.031819657s Oct 23 01:35:16.533: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 18.037065669s Oct 23 01:35:18.536: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 20.040265466s Oct 23 01:35:20.539: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 22.043111376s Oct 23 01:35:22.544: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Running", Reason="", readiness=true. Elapsed: 24.048356773s Oct 23 01:35:24.548: INFO: Pod "pod-subpath-test-configmap-69xx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.05170797s STEP: Saw pod success Oct 23 01:35:24.548: INFO: Pod "pod-subpath-test-configmap-69xx" satisfied condition "Succeeded or Failed" Oct 23 01:35:24.550: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-69xx container test-container-subpath-configmap-69xx: STEP: delete the pod Oct 23 01:35:24.561: INFO: Waiting for pod pod-subpath-test-configmap-69xx to disappear Oct 23 01:35:24.563: INFO: Pod pod-subpath-test-configmap-69xx no longer exists STEP: Deleting pod pod-subpath-test-configmap-69xx Oct 23 01:35:24.563: INFO: Deleting pod "pod-subpath-test-configmap-69xx" in namespace "subpath-8789" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:24.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8789" for this suite. • [SLOW TEST:26.113 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":463,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:24.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:24.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6221" for this suite. 
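------------------------------
The Create/Get/Update/Delete steps in the ResourceQuota spec above map one-to-one onto the CoreV1 ResourceQuotas client. A sketch against a live cluster, assuming the suite's kubeconfig path; the namespace and quota values are placeholders (the suite uses a throwaway namespace).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // placeholder namespace

	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	rq, err = cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Update: raise the pod ceiling, then read the change back.
	rq.Spec.Hard[corev1.ResourcePods] = resource.MustParse("3")
	rq, err = cs.CoreV1().ResourceQuotas(ns).Update(ctx, rq, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	pods := rq.Spec.Hard[corev1.ResourcePods]
	fmt.Println("hard pods after update:", pods.String())
	// Delete; the spec then verifies a Get returns NotFound.
	if err := cs.CoreV1().ResourceQuotas(ns).Delete(ctx, "test-quota", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------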
• ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:16.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:34:16.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 23 01:34:24.368: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T01:34:24Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T01:34:24Z]] name:name1 resourceVersion:94051 uid:1bae5cac-ff3b-4fea-b694-726df40be8a5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 23 01:34:34.375: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T01:34:34Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T01:34:34Z]] name:name2 resourceVersion:94381 uid:6f3c8a50-a634-459b-bcf1-a09d77701ca8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 23 01:34:44.382: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T01:34:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T01:34:44Z]] name:name1 resourceVersion:94683 uid:1bae5cac-ff3b-4fea-b694-726df40be8a5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 23 01:34:54.386: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T01:34:34Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T01:34:54Z]] name:name2 resourceVersion:94891 uid:6f3c8a50-a634-459b-bcf1-a09d77701ca8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 23 01:35:04.393: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T01:34:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update 
time:2021-10-23T01:34:44Z]] name:name1 resourceVersion:95257 uid:1bae5cac-ff3b-4fea-b694-726df40be8a5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 23 01:35:14.400: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T01:34:34Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T01:34:54Z]] name:name2 resourceVersion:95516 uid:6f3c8a50-a634-459b-bcf1-a09d77701ca8] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:24.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5922" for this suite. • [SLOW TEST:68.122 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":6,"skipped":139,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:20.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:35:20.623: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:35:22.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549720, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549720, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549720, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549720, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:35:25.646: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:25.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1951" for this suite. STEP: Destroying namespace "webhook-1951-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.528 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":22,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:23.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 01:35:28.081: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:28.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8130" for this suite. 
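------------------------------
What the "Expected: &{DONE} to match Container's Termination Message: DONE" assertion reads is the terminated container status: the kubelet copies whatever the container wrote at its terminationMessagePath into status.containerStatuses[].state.terminated.message. A sketch of a pod in this spec's spirit, non-root UID plus a non-default path; the UID, image, and path are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // placeholder non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // placeholder image
				// The kubelet bind-mounts the message file into the container,
				// so a non-root process can still write it.
				Command:                []string{"sh", "-c", "printf DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------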
• [SLOW TEST:5.073 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":500,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:28.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:35:28.147: INFO: Got root ca configmap in namespace "svcaccounts-419" Oct 23 01:35:28.150: INFO: Deleted root ca configmap in namespace "svcaccounts-419" STEP: waiting for a new root ca configmap created Oct 23 01:35:28.653: INFO: Recreated root ca configmap in namespace "svcaccounts-419" Oct 23 01:35:28.657: INFO: Updated root ca configmap in namespace "svcaccounts-419" STEP: waiting for the root ca configmap reconciled Oct 23 01:35:29.161: INFO: Reconciled root ca configmap in namespace "svcaccounts-419" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:29.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-419" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":28,"skipped":511,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":21,"skipped":472,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:24.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Oct 23 01:35:24.681: INFO: Waiting up to 5m0s for pod "test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f" in namespace "svcaccounts-9854" to be "Succeeded or Failed" Oct 23 01:35:24.683: INFO: Pod "test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.64318ms Oct 23 01:35:26.687: INFO: Pod "test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006761147s Oct 23 01:35:28.690: INFO: Pod "test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00993031s Oct 23 01:35:30.694: INFO: Pod "test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013723262s STEP: Saw pod success Oct 23 01:35:30.694: INFO: Pod "test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f" satisfied condition "Succeeded or Failed" Oct 23 01:35:30.697: INFO: Trying to get logs from node node2 pod test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f container agnhost-container: STEP: delete the pod Oct 23 01:35:30.710: INFO: Waiting for pod test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f to disappear Oct 23 01:35:30.712: INFO: Pod test-pod-9bfd19fd-20f2-4786-90ac-787de7272f8f no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:30.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9854" for this suite. 
• [SLOW TEST:6.072 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":22,"skipped":472,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:04.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-1313 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 01:35:04.575: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 01:35:04.613: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:35:06.616: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:35:08.616: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:10.616: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:12.617: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:14.617: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:16.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:18.617: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:20.619: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:22.617: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 01:35:24.615: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 01:35:24.620: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 23 01:35:26.623: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 01:35:30.644: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 01:35:30.644: INFO: Breadth first check of 10.244.3.12 on host 10.10.190.207... Oct 23 01:35:30.647: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.14:9080/dial?request=hostname&protocol=udp&host=10.244.3.12&port=8081&tries=1'] Namespace:pod-network-test-1313 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:35:30.647: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:35:30.776: INFO: Waiting for responses: map[] Oct 23 01:35:30.776: INFO: reached 10.244.3.12 after 0/1 tries Oct 23 01:35:30.776: INFO: Breadth first check of 10.244.4.106 on host 10.10.190.208... 
Oct 23 01:35:30.778: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.14:9080/dial?request=hostname&protocol=udp&host=10.244.4.106&port=8081&tries=1'] Namespace:pod-network-test-1313 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:35:30.778: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:35:30.879: INFO: Waiting for responses: map[] Oct 23 01:35:30.879: INFO: reached 10.244.4.106 after 0/1 tries Oct 23 01:35:30.879: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:30.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1313" for this suite. • [SLOW TEST:26.335 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:24.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:35:24.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665" in namespace "projected-7083" to be "Succeeded or Failed" Oct 23 01:35:24.961: INFO: Pod "downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665": Phase="Pending", Reason="", readiness=false. Elapsed: 3.028932ms Oct 23 01:35:26.966: INFO: Pod "downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007547718s Oct 23 01:35:28.969: INFO: Pod "downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010954375s Oct 23 01:35:30.973: INFO: Pod "downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014476854s STEP: Saw pod success Oct 23 01:35:30.973: INFO: Pod "downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665" satisfied condition "Succeeded or Failed" Oct 23 01:35:30.975: INFO: Trying to get logs from node node2 pod downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665 container client-container: STEP: delete the pod Oct 23 01:35:30.988: INFO: Waiting for pod downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665 to disappear Oct 23 01:35:30.990: INFO: Pod downwardapi-volume-a93797b9-bb4b-43d8-94fb-347c34a26665 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:30.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7083" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":140,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:32.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1023 01:34:33.694460 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:35:35.710: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:35.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9574" for this suite. 
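------------------------------
"When not orphaning" means the deployment is deleted with a non-orphan propagation policy, so the garbage collector is expected to remove the dependent ReplicaSet and pods; the "expected 0 rs, got 1 rs" lines are just the poll observing GC mid-flight. A sketch of such a delete; the namespace and deployment name are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Background: the delete returns immediately and the GC removes
	// dependents asynchronously. DeletePropagationOrphan would leave them behind.
	policy := metav1.DeletePropagationBackground
	err = cs.AppsV1().Deployments("default").Delete(context.TODO(),
		"simpletest-deployment", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("delete requested; dependents are collected asynchronously")
}
------------------------------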
• [SLOW TEST:63.096 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":11,"skipped":144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:31:33.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-2f59f410-c7c2-4ba2-a4d8-0726690d56e6 in namespace container-probe-5995 Oct 23 01:31:39.939: INFO: Started pod test-webserver-2f59f410-c7c2-4ba2-a4d8-0726690d56e6 in namespace container-probe-5995 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 01:31:39.942: INFO: Initial restart count of pod test-webserver-2f59f410-c7c2-4ba2-a4d8-0726690d56e6 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:40.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5995" for this suite. 
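------------------------------
The assertion in the container-probe spec above is simply that restartCount stays 0 over a roughly four-minute observation window while an httpGet probe polls /healthz. A sketch of the probe shape; the port, image, and thresholds are illustrative. Note that the v1.21-era k8s.io/api assumed here still names the probe's handler field Handler (later releases renamed it ProbeHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "test-webserver",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // placeholder image
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    3, // consecutive failures before a restart
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
------------------------------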
• [SLOW TEST:246.627 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":232,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:31.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-73ad7659-2432-40fb-98c2-f5b4920f88b6 STEP: Creating a pod to test consume secrets Oct 23 01:35:31.055: INFO: Waiting up to 5m0s for pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d" in namespace "secrets-877" to be "Succeeded or Failed" Oct 23 01:35:31.058: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035764ms Oct 23 01:35:33.061: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006715958s Oct 23 01:35:35.064: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009409636s Oct 23 01:35:37.069: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014184198s Oct 23 01:35:39.074: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019560091s Oct 23 01:35:41.080: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02504306s Oct 23 01:35:43.082: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.027629106s STEP: Saw pod success Oct 23 01:35:43.082: INFO: Pod "pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d" satisfied condition "Succeeded or Failed" Oct 23 01:35:43.085: INFO: Trying to get logs from node node2 pod pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d container secret-env-test: STEP: delete the pod Oct 23 01:35:43.105: INFO: Waiting for pod pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d to disappear Oct 23 01:35:43.107: INFO: Pod pod-secrets-9779ccae-5d04-4aa6-ae0e-34a361314f5d no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:43.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-877" for this suite. 
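Consuming a Secret "in env vars", as the spec below does, means the container's environment references a key of the Secret through a secretKeyRef; the test container then just echoes the variable and exits. A minimal sketch of the wiring, with the variable name, Secret name, and key all hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := corev1.EnvVar{
            Name: "SECRET_DATA", // variable name inside the container (assumption)
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                    Key:                  "data-1", // key within the Secret (assumption)
                },
            },
        }
        out, _ := json.MarshalIndent(env, "", "  ")
        fmt.Println(string(out))
    }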
• [SLOW TEST:12.094 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":152,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:25.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5766.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.9.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.9.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.9.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.9.217_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5766.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5766.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.9.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.9.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.9.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.9.217_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:35:39.881: INFO: Unable to read wheezy_udp@dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.884: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.887: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.909: INFO: Unable to read jessie_udp@dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local from pod dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34: the server could not find the requested resource (get pods dns-test-0957999b-fde4-4500-a780-20775072ac34) Oct 23 01:35:39.964: INFO: Lookups using dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34 failed for: [wheezy_udp@dns-test-service.dns-5766.svc.cluster.local wheezy_tcp@dns-test-service.dns-5766.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local jessie_udp@dns-test-service.dns-5766.svc.cluster.local jessie_tcp@dns-test-service.dns-5766.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5766.svc.cluster.local] Oct 23 01:35:45.018: INFO: DNS probes using dns-5766/dns-test-0957999b-fde4-4500-a780-20775072ac34 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:45.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5766" for this suite. 
• [SLOW TEST:19.233 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":23,"skipped":303,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:40.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 01:35:45.592: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:45.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1471" for this suite. 
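FallbackToLogsOnError changes where the kubelet takes the termination message from: if the container fails and its terminationMessagePath file is empty, the tail of the container log is used instead, which is how "DONE" written to stdout ends up as the message matched above. A sketch of such a container; the command is an assumption, the image is the busybox one this suite uses elsewhere:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "termination-message-container",
            Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
            // Write to stdout, then exit non-zero, leaving the
            // termination-log file empty so the fallback kicks in.
            Command: []string{"/bin/sh", "-c", "echo -n DONE; /bin/false"},
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }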
• [SLOW TEST:5.078 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":233,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:45.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 23 01:35:45.102: INFO: Waiting up to 5m0s for pod "pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f" in namespace "emptydir-4657" to be "Succeeded or Failed" Oct 23 01:35:45.107: INFO: Pod "pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.222974ms Oct 23 01:35:47.111: INFO: Pod "pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008926841s Oct 23 01:35:49.116: INFO: Pod "pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013991784s Oct 23 01:35:51.123: INFO: Pod "pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021180107s STEP: Saw pod success Oct 23 01:35:51.123: INFO: Pod "pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f" satisfied condition "Succeeded or Failed" Oct 23 01:35:51.125: INFO: Trying to get logs from node node2 pod pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f container test-container: STEP: delete the pod Oct 23 01:35:51.164: INFO: Waiting for pod pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f to disappear Oct 23 01:35:51.166: INFO: Pod pod-ebc52657-8fcd-4578-83ba-7f8c8cc1822f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:51.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4657" for this suite. 
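The (root,0644,tmpfs) variant below mounts an emptyDir backed by node memory: setting the medium to "Memory" makes the kubelet provision the volume as tmpfs, and the test then writes a file with the requested mode and reads it back. The volume half of that spec, as a sketch:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                // Medium "Memory" = tmpfs on the node instead of node disk.
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }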
• [SLOW TEST:6.106 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":312,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:45.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-0f30b3bd-8764-4b31-9585-f99559ce31d3 STEP: Creating a pod to test consume configMaps Oct 23 01:35:45.663: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c" in namespace "projected-1116" to be "Succeeded or Failed" Oct 23 01:35:45.665: INFO: Pod "pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299993ms Oct 23 01:35:47.670: INFO: Pod "pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007179577s Oct 23 01:35:49.675: INFO: Pod "pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011923546s Oct 23 01:35:51.679: INFO: Pod "pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016417916s STEP: Saw pod success Oct 23 01:35:51.679: INFO: Pod "pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c" satisfied condition "Succeeded or Failed" Oct 23 01:35:51.682: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c container agnhost-container: STEP: delete the pod Oct 23 01:35:51.695: INFO: Waiting for pod pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c to disappear Oct 23 01:35:51.698: INFO: Pod pod-projected-configmaps-5b9dacf3-e94a-4e17-9aa0-deb2d0af833c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:51.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1116" for this suite. 
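A projected volume differs from a plain configMap volume in that several sources (configMaps, secrets, downward API fields, service account tokens) can be merged under one mount point; the spec below needs only a single configMap projection. A sketch reusing the configMap name created in that test:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-configmap-test-volume-0f30b3bd-8764-4b31-9585-f99559ce31d3",
                            },
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }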
• [SLOW TEST:6.078 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:43.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:35:43.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a" in namespace "downward-api-6361" to be "Succeeded or Failed" Oct 23 01:35:43.162: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13807ms Oct 23 01:35:45.165: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0047997s Oct 23 01:35:47.169: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008877236s Oct 23 01:35:49.172: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012215932s Oct 23 01:35:51.176: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015955526s Oct 23 01:35:53.179: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019723082s STEP: Saw pod success Oct 23 01:35:53.180: INFO: Pod "downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a" satisfied condition "Succeeded or Failed" Oct 23 01:35:53.181: INFO: Trying to get logs from node node1 pod downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a container client-container: STEP: delete the pod Oct 23 01:35:53.195: INFO: Waiting for pod downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a to disappear Oct 23 01:35:53.196: INFO: Pod downwardapi-volume-f5952752-fecd-4f81-87b2-3e6728a9048a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:53.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6361" for this suite. 
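Setting a mode "on item file", as the downward API spec below does, means the permission bits are attached to the individual item rather than applied volume-wide through defaultMode. A sketch, with mode 0400 and metadata.name as the projected field both assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // assumed item mode for illustration
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "podname",
                        FieldRef: &corev1.ObjectFieldSelector{
                            APIVersion: "v1",
                            FieldPath:  "metadata.name",
                        },
                        Mode: &mode, // per-item mode, the behavior under test
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }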
• [SLOW TEST:10.078 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":156,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:07.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 01:35:07.897: INFO: PodSpec: initContainers in spec.initContainers Oct 23 01:35:57.161: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f52a6c76-30bb-493e-8a75-5c41f4a141b6", GenerateName:"", Namespace:"init-container-6078", SelfLink:"", UID:"6ad0349b-ad78-4770-8698-0b76ce16c89b", ResourceVersion:"96652", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770549707, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"897286200"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.13\"\n ],\n \"mac\": \"fa:2f:0d:3d:1a:50\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.13\"\n ],\n \"mac\": \"fa:2f:0d:3d:1a:50\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c11500), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c11518)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c11530), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c11548)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c11560), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c11578)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-pjm6h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), 
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004da1900), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-pjm6h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-pjm6h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-pjm6h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e09858), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003696620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e098e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e09900)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000e09908), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000e0990c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0082384b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549707, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549707, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549707, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549707, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.13", 
PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.13"}}, StartTime:(*v1.Time)(0xc003c115a8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003696700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003696770)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://0e69af6fde65d48c447053074673dc16ff0dbd77c69d87b85fe208b394fe191c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004da1980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004da1960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000e0998f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:57.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6078" for this suite. 
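The pod dump above makes the mechanics visible: init1 (/bin/false) keeps failing and restarting (RestartCount:3 at the time of the dump), init2 (/bin/true) stays Waiting, and the app container run1 (pause) is never started, so the pod is pinned at Pending with reason ContainersNotInitialized. The same spec, reconstructed from the dump:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
            Spec: corev1.PodSpec{
                // RestartAlways keeps retrying init1 forever; app containers
                // only run once every init container has exited successfully.
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                        Command: []string{"/bin/false"}}, // always fails
                    {Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                        Command: []string{"/bin/true"}}, // never reached
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"}, // never started
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }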
• [SLOW TEST:49.295 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":29,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:57.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:57.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1562" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":30,"skipped":570,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:51.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 23 01:35:51.788: INFO: Waiting up to 5m0s for pod "pod-3823881d-c89d-4002-80a0-e4a945e591bf" in namespace "emptydir-999" to be "Succeeded or Failed" Oct 23 01:35:51.790: INFO: Pod "pod-3823881d-c89d-4002-80a0-e4a945e591bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337035ms Oct 23 01:35:53.793: INFO: Pod "pod-3823881d-c89d-4002-80a0-e4a945e591bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005761997s Oct 23 01:35:55.797: INFO: Pod "pod-3823881d-c89d-4002-80a0-e4a945e591bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009578155s Oct 23 01:35:57.802: INFO: Pod "pod-3823881d-c89d-4002-80a0-e4a945e591bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013984791s STEP: Saw pod success Oct 23 01:35:57.802: INFO: Pod "pod-3823881d-c89d-4002-80a0-e4a945e591bf" satisfied condition "Succeeded or Failed" Oct 23 01:35:57.804: INFO: Trying to get logs from node node1 pod pod-3823881d-c89d-4002-80a0-e4a945e591bf container test-container: STEP: delete the pod Oct 23 01:35:57.816: INFO: Waiting for pod pod-3823881d-c89d-4002-80a0-e4a945e591bf to disappear Oct 23 01:35:57.818: INFO: Pod pod-3823881d-c89d-4002-80a0-e4a945e591bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:35:57.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-999" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":268,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:33:16.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9999 STEP: creating service affinity-nodeport-transition in namespace services-9999 STEP: creating replication controller affinity-nodeport-transition in namespace services-9999 I1023 01:33:16.800401 32 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9999, replica count: 3 I1023 01:33:19.852069 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:33:22.853129 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:33:25.854149 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:33:25.862: INFO: Creating new exec pod Oct 23 01:33:30.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Oct 23 01:33:31.148: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Oct 23 01:33:31.148: 
INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:33:31.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.51.241 80' Oct 23 01:33:31.396: INFO: stderr: "+ nc -v -t -w 2 10.233.51.241 80\nConnection to 10.233.51.241 80 port [tcp/http] succeeded!\n+ echo hostName\n" Oct 23 01:33:31.396: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:33:31.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:31.631: INFO: rc: 1 Oct 23 01:33:31.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:32.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:32.860: INFO: rc: 1 Oct 23 01:33:32.860: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:33.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:33.864: INFO: rc: 1 Oct 23 01:33:33.864: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:34.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:34.878: INFO: rc: 1 Oct 23 01:33:34.878: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:33:35.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:35.913: INFO: rc: 1 Oct 23 01:33:35.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:36.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:37.377: INFO: rc: 1 Oct 23 01:33:37.377: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:37.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:37.901: INFO: rc: 1 Oct 23 01:33:37.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:38.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:38.900: INFO: rc: 1 Oct 23 01:33:38.900: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:33:39.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:40.116: INFO: rc: 1 Oct 23 01:33:40.116: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:40.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:40.900: INFO: rc: 1 Oct 23 01:33:40.900: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:41.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:42.031: INFO: rc: 1 Oct 23 01:33:42.031: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:33:42.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:33:43.168: INFO: rc: 1 Oct 23 01:33:43.168: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:33:43.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Oct 23 01:33:43.869: INFO: rc: 1
Oct 23 01:33:43.869: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31195
+ echo hostName
nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
[the identical probe is retried at roughly one-second intervals from 01:33:44 through 01:35:21; every attempt returns rc: 1 with "nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused"; ~90 identical entries elided]
Oct 23 01:35:22.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Oct 23 01:35:22.922: INFO: rc: 1
Oct 23 01:35:22.922: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31195
+ echo hostName
nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Oct 23 01:35:23.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:23.903: INFO: rc: 1 Oct 23 01:35:23.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:24.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:24.879: INFO: rc: 1 Oct 23 01:35:24.879: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:25.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:26.699: INFO: rc: 1 Oct 23 01:35:26.699: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:27.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:28.304: INFO: rc: 1 Oct 23 01:35:28.304: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:35:28.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:29.302: INFO: rc: 1 Oct 23 01:35:29.302: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:29.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:30.523: INFO: rc: 1 Oct 23 01:35:30.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:30.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:31.050: INFO: rc: 1 Oct 23 01:35:31.050: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:31.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Oct 23 01:35:32.054: INFO: rc: 1 Oct 23 01:35:32.054: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 + echo hostName nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
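The probe being retried above is just a kubectl exec wrapped in a timeout loop. For readers who want to reproduce it outside the suite, here is a minimal standalone sketch (not the framework's own helper; it assumes kubectl is on PATH and reuses the kubeconfig, namespace, pod, and endpoint taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// probe execs into the client pod and tries to reach the NodePort,
// exactly as the logged command does.
func probe() (string, error) {
	out, err := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config", "--namespace=services-9999",
		"exec", "execpod-affinityq59m4", "--",
		"/bin/sh", "-c", "echo hostName | nc -v -t -w 2 10.10.190.207 31195").CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // the suite's 2m0s budget
	for time.Now().Before(deadline) {
		if out, err := probe(); err == nil {
			fmt.Println("reachable, backend replied:", out)
			return
		}
		time.Sleep(time.Second) // the log shows ~1s between attempts
	}
	fmt.Println("service is not reachable within 2m0s timeout")
}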
Oct 23 01:35:32.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Oct 23 01:35:32.412: INFO: rc: 1
Oct 23 01:35:32.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9999 exec execpod-affinityq59m4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31195
+ echo hostName
nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:35:32.413: FAIL: Unexpected error:
    <*errors.errorString | 0xc003642c10>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0018e11e0, 0x779f8f8, 0xc0011df8c0, 0xc0016eb680, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2527
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001983500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001983500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001983500, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 01:35:32.414: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9999, will wait for the garbage collector to delete the pods
Oct 23 01:35:32.490: INFO: Deleting ReplicationController affinity-nodeport-transition took: 13.37157ms
Oct 23 01:35:32.590: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.328493ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9999".
STEP: Found 27 events.
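The failing frame, execAffinityTestForNonLBServiceWithOptionalTransition, never reached its actual assertion: once the endpoint responds, the helper hits it repeatedly from the same client pod and requires every response to name the same backend pod. Roughly, the affinity check reduces to a comparison like the sketch below (hypothetical names; the real logic lives in test/e2e/network/service.go):

package main

import "fmt"

// affinityHolds reports whether every observed response came from the same
// backend, which is what sessionAffinity: ClientIP is meant to guarantee.
func affinityHolds(hostnames []string) bool {
	if len(hostnames) == 0 {
		return false // no responses at all, as in the failed run above
	}
	for _, h := range hostnames[1:] {
		if h != hostnames[0] {
			return false // traffic switched backends: affinity broken
		}
	}
	return true
}

func main() {
	// With the NodePort unreachable, the suite collected zero hostnames and
	// failed on the 2m0s timeout before this comparison could ever run.
	fmt.Println(affinityHolds([]string{"affinity-nodeport-transition-9swz2", "affinity-nodeport-transition-9swz2"})) // true
	fmt.Println(affinityHolds(nil)) // false
}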
Oct 23 01:35:56.907: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-9swz2: { } Scheduled: Successfully assigned services-9999/affinity-nodeport-transition-9swz2 to node2
Oct 23 01:35:56.907: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-srdpr: { } Scheduled: Successfully assigned services-9999/affinity-nodeport-transition-srdpr to node1
Oct 23 01:35:56.907: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-vrh2v: { } Scheduled: Successfully assigned services-9999/affinity-nodeport-transition-vrh2v to node2
Oct 23 01:35:56.907: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityq59m4: { } Scheduled: Successfully assigned services-9999/execpod-affinityq59m4 to node2
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:16 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-vrh2v
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:16 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-srdpr
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:16 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-9swz2
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:18 +0000 UTC - event for affinity-nodeport-transition-srdpr: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:18 +0000 UTC - event for affinity-nodeport-transition-srdpr: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 323.547919ms
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:18 +0000 UTC - event for affinity-nodeport-transition-srdpr: {kubelet node1} Created: Created container affinity-nodeport-transition
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:18 +0000 UTC - event for affinity-nodeport-transition-srdpr: {kubelet node1} Started: Started container affinity-nodeport-transition
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:20 +0000 UTC - event for affinity-nodeport-transition-vrh2v: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:20 +0000 UTC - event for affinity-nodeport-transition-vrh2v: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 320.738359ms
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:20 +0000 UTC - event for affinity-nodeport-transition-vrh2v: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:21 +0000 UTC - event for affinity-nodeport-transition-9swz2: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 341.796891ms
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:21 +0000 UTC - event for affinity-nodeport-transition-9swz2: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:21 +0000 UTC - event for affinity-nodeport-transition-9swz2: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:21 +0000 UTC - event for affinity-nodeport-transition-vrh2v: {kubelet node2} Started: Started container affinity-nodeport-transition
Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:22 +0000 UTC - event for
affinity-nodeport-transition-9swz2: {kubelet node2} Started: Started container affinity-nodeport-transition Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:27 +0000 UTC - event for execpod-affinityq59m4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:28 +0000 UTC - event for execpod-affinityq59m4: {kubelet node2} Started: Started container agnhost-container Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:28 +0000 UTC - event for execpod-affinityq59m4: {kubelet node2} Created: Created container agnhost-container Oct 23 01:35:56.907: INFO: At 2021-10-23 01:33:28 +0000 UTC - event for execpod-affinityq59m4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 507.052385ms Oct 23 01:35:56.907: INFO: At 2021-10-23 01:35:32 +0000 UTC - event for affinity-nodeport-transition-9swz2: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Oct 23 01:35:56.907: INFO: At 2021-10-23 01:35:32 +0000 UTC - event for affinity-nodeport-transition-srdpr: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Oct 23 01:35:56.907: INFO: At 2021-10-23 01:35:32 +0000 UTC - event for affinity-nodeport-transition-vrh2v: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Oct 23 01:35:56.907: INFO: At 2021-10-23 01:35:32 +0000 UTC - event for execpod-affinityq59m4: {kubelet node2} Killing: Stopping container agnhost-container Oct 23 01:35:56.909: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:35:56.909: INFO: Oct 23 01:35:56.913: INFO: Logging node info for node master1 Oct 23 01:35:56.915: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 96394 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:35:56.916: INFO: Logging kubelet events for node master1 Oct 23 01:35:56.918: INFO: Logging pods the kubelet 
thinks is on node master1
Oct 23 01:35:56.942: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:35:56.942: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:35:56.942: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:35:56.942: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container coredns ready: true, restart count 2
Oct 23 01:35:56.942: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container docker-registry ready: true, restart count 0
Oct 23 01:35:56.942: INFO: Container nginx ready: true, restart count 0
Oct 23 01:35:56.942: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:35:56.942: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:35:56.942: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:35:56.942: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 01:35:56.942: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:35:56.942: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:56.942: INFO: Container kube-scheduler ready: true, restart count 0
W1023 01:35:56.955705 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
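These per-node pod dumps come from the framework's diagnostics; the same view can be reconstructed with client-go by filtering pods on spec.nodeName. A minimal sketch, assuming client-go (signatures as of v0.21.x, matching the cluster version) and the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List pods across all namespaces scheduled to master1, mirroring
	// the "Logging pods the kubelet thinks is on node master1" dump.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
	}
}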
Oct 23 01:35:57.147: INFO: Latency metrics for node master1 Oct 23 01:35:57.147: INFO: Logging node info for node master2 Oct 23 01:35:57.149: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 96527 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:52 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:52 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:52 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:35:52 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:35:57.150: INFO: Logging kubelet events for node master2
Oct 23 01:35:57.153: INFO: Logging pods the kubelet thinks is on node master2
Oct 23 01:35:57.167: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:35:57.167: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Init container install-cni ready: true, restart count 2
Oct 23 01:35:57.167: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:35:57.167: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:35:57.167: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 01:35:57.167: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 01:35:57.167: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:35:57.167: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:35:57.167: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:35:57.167: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.167: INFO: Container autoscaler ready: true, restart count 1
W1023 01:35:57.182949 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
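Each Node dump above ends with the node's condition list (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready). A compact way to pull just those conditions for the nodes named in this log, again a client-go sketch rather than the framework's own logger (same v0.21.x signature assumption as before):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"master1", "master2", "master3", "node1", "node2"} {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		// The same conditions appear in struct form in the dumps above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s %s=%s (%s)\n", name, c.Type, c.Status, c.Reason)
		}
	}
}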
Oct 23 01:35:57.245: INFO: Latency metrics for node master2 Oct 23 01:35:57.245: INFO: Logging node info for node master3 Oct 23 01:35:57.248: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 96366 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:35:57.248: INFO: Logging kubelet events for node master3
Oct 23 01:35:57.251: INFO: Logging pods the kubelet thinks is on node master3
Oct 23 01:35:57.264: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 01:35:57.264: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:35:57.264: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container coredns ready: true, restart count 2
Oct 23 01:35:57.264: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:35:57.264: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 01:35:57.264: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:35:57.264: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:35:57.264: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 01:35:57.264: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:35:57.264: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 01:35:57.264: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container
statuses recorded) Oct 23 01:35:57.264: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:35:57.264: INFO: Container node-exporter ready: true, restart count 0 W1023 01:35:57.280412 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:35:57.349: INFO: Latency metrics for node master3 Oct 23 01:35:57.349: INFO: Logging node info for node node1 Oct 23 01:35:57.352: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 96391 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:35:48 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:35:57.353: INFO: Logging kubelet events for node node1 Oct 23 01:35:57.355: INFO: Logging pods the kubelet thinks are on node node1 Oct 23 01:35:57.371: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 01:35:57.372: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container grafana ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:35:57.372: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:35:57.372: INFO: Container collectd ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:35:57.372: INFO: nodeport-test-ml4fp started at 2021-10-23 01:34:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container nodeport-test ready: true, restart count 0 Oct 23 01:35:57.372: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:35:57.372: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:35:57.372: INFO: pod-init-f52a6c76-30bb-493e-8a75-5c41f4a141b6 started at 2021-10-23 01:35:07 +0000 UTC (2+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Init container init1 ready: false, restart count 3 Oct 23 01:35:57.372: INFO: Init container init2 ready: false, restart count 0 Oct 23 01:35:57.372: INFO: Container run1 ready: false, restart count 0 Oct 23 01:35:57.372: INFO: pod-3823881d-c89d-4002-80a0-e4a945e591bf started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container test-container ready: false, restart count 0 Oct 23 01:35:57.372: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:35:57.372: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:35:57.372: INFO: simpletest-rc-to-be-deleted-5xp7k started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container nginx ready: true, restart count 0 Oct 23 01:35:57.372: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:35:57.372: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:35:57.372: INFO: Container nodereport ready: true, restart count 0 
Oct 23 01:35:57.372: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:35:57.372: INFO: foo-lpv2f started at 2021-10-23 01:35:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container c ready: true, restart count 0 Oct 23 01:35:57.372: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 01:35:57.372: INFO: Container discover ready: false, restart count 0 Oct 23 01:35:57.372: INFO: Container init ready: false, restart count 0 Oct 23 01:35:57.372: INFO: Container install ready: false, restart count 0 Oct 23 01:35:57.372: INFO: nodeport-test-dzdbc started at 2021-10-23 01:34:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container nodeport-test ready: true, restart count 0 Oct 23 01:35:57.372: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:35:57.372: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:35:57.372: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:35:57.372: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:35:57.372: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:35:57.372: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:35:57.372: INFO: foo-t45q6 started at 2021-10-23 01:35:35 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container c ready: true, restart count 0 Oct 23 01:35:57.372: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.372: INFO: Container kube-sriovdp ready: true, restart count 0 W1023 01:35:57.387749 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
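------------------------------
The per-node dump above (full node info, then every pod the kubelet on node1 is expected to be running, with per-container readiness and restart counts) is the diagnostic output the e2e framework prints when a spec fails. A similar listing can be reproduced with client-go by filtering pods on spec.nodeName; the sketch below is illustrative, not the framework's own helper, and assumes the kubeconfig path shown in this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, matching the one logged by the suite.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods in all namespaces bound to node1, i.e. the pods the
	// kubelet on that node should be running.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s: container %s ready: %v, restart count %d\n",
				p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}

The field selector is evaluated server-side, so the listing stays cheap even on large clusters.
------------------------------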
Oct 23 01:35:57.611: INFO: Latency metrics for node node1 Oct 23 01:35:57.611: INFO: Logging node info for node node2 Oct 23 01:35:57.615: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 96578 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:55 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:55 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:35:55 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:35:55 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:35:57.616: INFO: Logging kubelet events for node node2 Oct 23 01:35:57.620: INFO: Logging pods the kubelet thinks are on node node2 Oct 23 01:35:57.634: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:35:57.634: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 01:35:57.634: INFO: Container discover ready: false, restart count 0 Oct 23 01:35:57.634: INFO: Container init ready: false, restart count 0 Oct 23 01:35:57.634: INFO: Container install ready: false, restart count 0 Oct 23 01:35:57.634: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:35:57.634: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:35:57.634: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:35:57.634: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:35:57.634: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:35:57.634: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:35:57.634: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:35:57.634: INFO: liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 started at 2021-10-23 01:35:29 +0000 UTC (0+1 
container statuses recorded) Oct 23 01:35:57.634: INFO: Container agnhost-container ready: true, restart count 1 Oct 23 01:35:57.634: INFO: pod-subpath-test-projected-bf4r started at 2021-10-23 01:35:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container test-container-subpath-projected-bf4r ready: true, restart count 0 Oct 23 01:35:57.634: INFO: simpletest-rc-to-be-deleted-kjc9s started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container nginx ready: false, restart count 0 Oct 23 01:35:57.634: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:35:57.634: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:35:57.634: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:35:57.634: INFO: simpletest-rc-to-be-deleted-2bs5t started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container nginx ready: false, restart count 0 Oct 23 01:35:57.634: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:35:57.634: INFO: Container collectd ready: true, restart count 0 Oct 23 01:35:57.634: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:35:57.634: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:35:57.634: INFO: simpletest-rc-to-be-deleted-6l9dw started at (0+0 container statuses recorded) Oct 23 01:35:57.634: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:35:57.634: INFO: pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945 started at (0+0 container statuses recorded) Oct 23 01:35:57.634: INFO: simpletest-rc-to-be-deleted-cdw8c started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container nginx ready: false, restart count 0 Oct 23 01:35:57.634: INFO: execpod5fmgl started at 2021-10-23 01:34:49 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:35:57.634: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:35:57.634: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:35:57.634: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:35:57.634: INFO: pod-subpath-test-downwardapi-4d8w started at 2021-10-23 01:35:53 +0000 UTC (0+1 container statuses recorded) Oct 23 01:35:57.634: INFO: Container test-container-subpath-downwardapi-4d8w ready: false, restart count 0 W1023 01:35:57.647781 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:36:01.203: INFO: Latency metrics for node node2 Oct 23 01:36:01.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9999" for this suite. 
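------------------------------
Everything from the node1 dump onward was emitted because the NodePort reachability probe gave up; the formal verdict follows below. A minimal sketch of such a probe, assuming the endpoint from the failure message (node1's InternalIP 10.10.190.207 plus the NodePort 31195) and a plain TCP dial standing in for the framework's richer checks; the function name is illustrative, not the suite's actual helper.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr repeatedly until a connection succeeds or the
// timeout elapses. Illustrative stand-in for the suite's reachability check.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// Endpoint from the failure below: node InternalIP plus NodePort.
	if err := waitForTCP("10.10.190.207:31195", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
------------------------------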
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [164.453 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:35:32.413: Unexpected error: <*errors.errorString | 0xc003642c10>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":167,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:30.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-bf4r STEP: Creating a pod to test atomic-volume-subpath Oct 23 01:35:30.969: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bf4r" in namespace "subpath-9878" to be "Succeeded or Failed" Oct 23 01:35:30.972: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192781ms Oct 23 01:35:32.975: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006051426s Oct 23 01:35:34.980: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010858048s Oct 23 01:35:36.984: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014985039s Oct 23 01:35:38.990: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020607581s Oct 23 01:35:40.994: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02490884s Oct 23 01:35:42.999: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 12.029228687s Oct 23 01:35:45.002: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 14.032156913s Oct 23 01:35:47.007: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.037563391s Oct 23 01:35:49.013: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 18.043330583s Oct 23 01:35:51.019: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 20.049193417s Oct 23 01:35:53.022: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 22.052919035s Oct 23 01:35:55.026: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 24.057037592s Oct 23 01:35:57.031: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 26.061288794s Oct 23 01:35:59.036: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 28.066448998s Oct 23 01:36:01.040: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Running", Reason="", readiness=true. Elapsed: 30.070295181s Oct 23 01:36:03.043: INFO: Pod "pod-subpath-test-projected-bf4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.07408256s STEP: Saw pod success Oct 23 01:36:03.043: INFO: Pod "pod-subpath-test-projected-bf4r" satisfied condition "Succeeded or Failed" Oct 23 01:36:03.046: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-bf4r container test-container-subpath-projected-bf4r: STEP: delete the pod Oct 23 01:36:03.059: INFO: Waiting for pod pod-subpath-test-projected-bf4r to disappear Oct 23 01:36:03.061: INFO: Pod pod-subpath-test-projected-bf4r no longer exists STEP: Deleting pod pod-subpath-test-projected-bf4r Oct 23 01:36:03.061: INFO: Deleting pod "pod-subpath-test-projected-bf4r" in namespace "subpath-9878" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:03.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9878" for this suite. 
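------------------------------
The subpath spec above shows the suite's standard wait pattern: create the pod, then poll its phase every ~2s until it reaches Succeeded or Failed, logging the elapsed time at each poll. A minimal client-go sketch of that loop, reusing the pod and namespace names from the log; it is illustrative, not the framework's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	// Poll every 2s, give up after 5m0s, the same budget the log shows.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("subpath-9878").Get(context.TODO(),
			"pod-subpath-test-projected-bf4r", metav1.GetOptions{})
		if err != nil {
			return false, err // abort the poll on API errors
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", pod.Name, pod.Status.Phase, time.Since(start))
		// Done once the pod reaches a terminal phase.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}
------------------------------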
• [SLOW TEST:32.139 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":403,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:57.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-79bc2ec2-a990-4d80-8fe4-34c47cf24a44 STEP: Creating a pod to test consume configMaps Oct 23 01:35:57.314: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945" in namespace "projected-5880" to be "Succeeded or Failed" Oct 23 01:35:57.317: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746557ms Oct 23 01:35:59.320: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006041877s Oct 23 01:36:01.325: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010420358s Oct 23 01:36:03.329: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014171591s Oct 23 01:36:05.332: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017478973s Oct 23 01:36:07.337: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022469832s STEP: Saw pod success Oct 23 01:36:07.337: INFO: Pod "pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945" satisfied condition "Succeeded or Failed" Oct 23 01:36:07.339: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945 container agnhost-container: STEP: delete the pod Oct 23 01:36:07.355: INFO: Waiting for pod pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945 to disappear Oct 23 01:36:07.357: INFO: Pod pod-projected-configmaps-0742c108-c908-441d-b4e8-fc61bfc86945 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:07.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5880" for this suite. 
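------------------------------
The projected-ConfigMap spec above ("mappings and Item mode set") consumes a ConfigMap through a projected volume in which each item is mapped to a path with an explicit per-item file mode. A minimal sketch of such a pod spec in Go; the key, path, and 0400 mode are assumptions for illustration, since the log does not print the test's manifest.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedConfigMapPod() *corev1.Pod {
	mode := int32(0400) // per-item file mode, the "Item mode" the test name refers to
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-map",
								},
								// Map one key to a relative path with an explicit mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = projectedConfigMapPod() }
------------------------------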
• [SLOW TEST:10.085 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":572,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:01.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Oct 23 01:36:07.806: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9952 pod-service-account-2f2fc4d2-27b9-4a9a-85a3-9b04790fd3d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 23 01:36:08.024: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9952 pod-service-account-2f2fc4d2-27b9-4a9a-85a3-9b04790fd3d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 23 01:36:08.302: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9952 pod-service-account-2f2fc4d2-27b9-4a9a-85a3-9b04790fd3d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:08.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9952" for this suite. 
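------------------------------
The ServiceAccounts spec above verifies the token mount by exec-ing cat against three files; those paths are where the projected service-account volume lands in every pod that mounts it. A minimal in-pod sketch reading the same files (the paths are exactly those shown in the kubectl exec commands above):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Files projected into any pod that mounts its service account token.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(base + "/" + f)
		if err != nil {
			fmt.Println("read error:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}
------------------------------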
• [SLOW TEST:7.299 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":9,"skipped":181,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:08.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 01:36:08.617: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 23 01:36:08.620: INFO: starting watch STEP: patching STEP: updating Oct 23 01:36:08.629: INFO: waiting for watch events with expected annotations Oct 23 01:36:08.629: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:08.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1449" for this suite. 
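------------------------------
The Ingress API spec above exercises plain create/get/list/watch/patch/update/delete, plus the /status subresource, against networking.k8s.io/v1; no controller has to satisfy the Ingress, so it completes in milliseconds. A minimal client-go sketch of the create step; the Ingress name and backend service are illustrative.

package main

import (
	"context"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ing := &netv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "example-ingress"},
		Spec: netv1.IngressSpec{
			// Route everything to one backend service; hypothetical names.
			DefaultBackend: &netv1.IngressBackend{
				Service: &netv1.IngressServiceBackend{
					Name: "example-svc",
					Port: netv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}
	if _, err := cs.NetworkingV1().Ingresses("ingress-1449").Create(context.TODO(), ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------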
• ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:57.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:35:57.859: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Oct 23 01:36:05.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 create -f -' Oct 23 01:36:06.366: INFO: stderr: "" Oct 23 01:36:06.366: INFO: stdout: "e2e-test-crd-publish-openapi-1296-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 23 01:36:06.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 delete e2e-test-crd-publish-openapi-1296-crds test-foo' Oct 23 01:36:06.521: INFO: stderr: "" Oct 23 01:36:06.521: INFO: stdout: "e2e-test-crd-publish-openapi-1296-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Oct 23 01:36:06.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 apply -f -' Oct 23 01:36:06.833: INFO: stderr: "" Oct 23 01:36:06.833: INFO: stdout: "e2e-test-crd-publish-openapi-1296-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 23 01:36:06.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 delete e2e-test-crd-publish-openapi-1296-crds test-foo' Oct 23 01:36:07.003: INFO: stderr: "" Oct 23 01:36:07.003: INFO: stdout: "e2e-test-crd-publish-openapi-1296-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Oct 23 01:36:07.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 create -f -' Oct 23 01:36:07.306: INFO: rc: 1 Oct 23 01:36:07.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 apply -f -' Oct 23 01:36:07.598: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 23 01:36:07.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 create -f -' Oct 23 01:36:07.888: INFO: rc: 1 Oct 23 01:36:07.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 --namespace=crd-publish-openapi-7557 apply -f -' Oct 23 01:36:08.161: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 23 01:36:08.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-7557 explain e2e-test-crd-publish-openapi-1296-crds' Oct 23 01:36:08.492: INFO: stderr: "" Oct 23 01:36:08.492: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1296-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 23 01:36:08.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 explain e2e-test-crd-publish-openapi-1296-crds.metadata' Oct 23 01:36:08.832: INFO: stderr: "" Oct 23 01:36:08.832: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1296-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 23 01:36:08.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 explain e2e-test-crd-publish-openapi-1296-crds.spec' Oct 23 01:36:09.169: INFO: stderr: "" Oct 23 01:36:09.169: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1296-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 23 01:36:09.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 explain e2e-test-crd-publish-openapi-1296-crds.spec.bars' Oct 23 01:36:09.482: INFO: stderr: "" Oct 23 01:36:09.482: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1296-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 23 01:36:09.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7557 explain e2e-test-crd-publish-openapi-1296-crds.spec.bars2' Oct 23 01:36:09.795: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:12.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7557" for this suite. • [SLOW TEST:15.003 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":17,"skipped":271,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:12.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod 
succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:16.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5078" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":18,"skipped":272,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:07.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:18.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8710" for this suite. • [SLOW TEST:11.067 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":32,"skipped":594,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:18.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:18.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5639" for this suite. 
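
The [sig-storage] Secrets spec that just tore down exercises the Secret `immutable` field. As a minimal sketch (the name and payload below are illustrative, not taken from this run), such a Secret looks like:

apiVersion: v1
kind: Secret
metadata:
  name: immutable-secret-demo   # illustrative name, not from this run
type: Opaque
immutable: true                 # once true, data/stringData can no longer be updated
stringData:
  password: s3cr3t              # illustrative payload

Once `immutable: true` is set, the API server rejects any update to the Secret's data, and the field itself cannot be flipped back to false; the object must be deleted and recreated to change its contents.
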
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":33,"skipped":606,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:53.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-4d8w STEP: Creating a pod to test atomic-volume-subpath Oct 23 01:35:53.255: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4d8w" in namespace "subpath-4226" to be "Succeeded or Failed" Oct 23 01:35:53.257: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284658ms Oct 23 01:35:55.262: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007128404s Oct 23 01:35:57.265: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01083243s Oct 23 01:35:59.270: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015907818s Oct 23 01:36:01.275: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020036503s Oct 23 01:36:03.279: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024889491s Oct 23 01:36:05.283: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 12.028696888s Oct 23 01:36:07.288: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 14.033888915s Oct 23 01:36:09.293: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 16.038114587s Oct 23 01:36:11.297: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 18.04223392s Oct 23 01:36:13.301: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 20.046528612s Oct 23 01:36:15.305: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 22.050704482s Oct 23 01:36:17.309: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 24.05453898s Oct 23 01:36:19.312: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 26.05717328s Oct 23 01:36:21.317: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 28.062528686s Oct 23 01:36:23.321: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Running", Reason="", readiness=true. Elapsed: 30.066178162s Oct 23 01:36:25.324: INFO: Pod "pod-subpath-test-downwardapi-4d8w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 32.069815415s STEP: Saw pod success Oct 23 01:36:25.324: INFO: Pod "pod-subpath-test-downwardapi-4d8w" satisfied condition "Succeeded or Failed" Oct 23 01:36:25.327: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-4d8w container test-container-subpath-downwardapi-4d8w: STEP: delete the pod Oct 23 01:36:25.344: INFO: Waiting for pod pod-subpath-test-downwardapi-4d8w to disappear Oct 23 01:36:25.346: INFO: Pod pod-subpath-test-downwardapi-4d8w no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-4d8w Oct 23 01:36:25.346: INFO: Deleting pod "pod-subpath-test-downwardapi-4d8w" in namespace "subpath-4226" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:25.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4226" for this suite. • [SLOW TEST:32.142 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":160,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:25.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Oct 23 01:36:25.397: INFO: created test-pod-1 Oct 23 01:36:25.406: INFO: created test-pod-2 Oct 23 01:36:25.415: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:25.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4903" for this suite. 
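
The Pods spec above creates test-pod-1 through test-pod-3 and removes them with a single delete-collection call. A rough sketch of one such pod (the label and image are assumptions for illustration, not taken from the run):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-1
  labels:
    group: collection-demo      # illustrative label, used only by the selector below
spec:
  containers:
  - name: main
    image: busybox              # assumption; the e2e suite uses its own test images
    command: ["sleep", "3600"]

With all three pods carrying the same label, `kubectl delete pods -l group=collection-demo` issues one DELETE against the pod collection instead of three per-object deletes, which is the code path this conformance test covers.
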
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":11,"skipped":162,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:35.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5152, will wait for the garbage collector to delete the pods Oct 23 01:35:51.869: INFO: Deleting Job.batch foo took: 4.153939ms Oct 23 01:35:51.969: INFO: Terminating Job.batch foo pods took: 100.292516ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:26.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5152" for this suite. • [SLOW TEST:50.704 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":12,"skipped":178,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:25.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 23 01:36:25.491: INFO: Waiting up to 5m0s for pod "pod-964e90ea-3730-4320-aa83-b22bc042123b" in namespace "emptydir-361" to be "Succeeded or Failed" Oct 23 01:36:25.493: INFO: Pod "pod-964e90ea-3730-4320-aa83-b22bc042123b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.955394ms Oct 23 01:36:27.496: INFO: Pod "pod-964e90ea-3730-4320-aa83-b22bc042123b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004821602s Oct 23 01:36:29.500: INFO: Pod "pod-964e90ea-3730-4320-aa83-b22bc042123b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009040163s Oct 23 01:36:31.505: INFO: Pod "pod-964e90ea-3730-4320-aa83-b22bc042123b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013632811s STEP: Saw pod success Oct 23 01:36:31.505: INFO: Pod "pod-964e90ea-3730-4320-aa83-b22bc042123b" satisfied condition "Succeeded or Failed" Oct 23 01:36:31.507: INFO: Trying to get logs from node node2 pod pod-964e90ea-3730-4320-aa83-b22bc042123b container test-container: STEP: delete the pod Oct 23 01:36:31.521: INFO: Waiting for pod pod-964e90ea-3730-4320-aa83-b22bc042123b to disappear Oct 23 01:36:31.522: INFO: Pod pod-964e90ea-3730-4320-aa83-b22bc042123b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:31.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-361" for this suite. • [SLOW TEST:6.071 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:26.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:36:26.542: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b" in namespace "projected-120" to be "Succeeded or Failed" Oct 23 01:36:26.547: INFO: Pod "downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222211ms Oct 23 01:36:28.551: INFO: Pod "downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0080312s Oct 23 01:36:30.555: INFO: Pod "downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012341955s Oct 23 01:36:32.559: INFO: Pod "downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016751077s STEP: Saw pod success Oct 23 01:36:32.559: INFO: Pod "downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b" satisfied condition "Succeeded or Failed" Oct 23 01:36:32.562: INFO: Trying to get logs from node node2 pod downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b container client-container: STEP: delete the pod Oct 23 01:36:32.579: INFO: Waiting for pod downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b to disappear Oct 23 01:36:32.580: INFO: Pod downwardapi-volume-3ca1adf8-a00b-48a9-a236-39e6e4504f9b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:32.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-120" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":169,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:31.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-c803eb20-3558-4b62-bffe-9791b11c0bc1 STEP: Creating a pod to test consume secrets Oct 23 01:36:31.569: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733" in namespace "projected-3598" to be "Succeeded or Failed" Oct 23 01:36:31.571: INFO: Pod "pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101998ms Oct 23 01:36:33.575: INFO: Pod "pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005937613s Oct 23 01:36:35.582: INFO: Pod "pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0125591s STEP: Saw pod success Oct 23 01:36:35.582: INFO: Pod "pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733" satisfied condition "Succeeded or Failed" Oct 23 01:36:35.585: INFO: Trying to get logs from node node2 pod pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733 container projected-secret-volume-test: STEP: delete the pod Oct 23 01:36:35.601: INFO: Waiting for pod pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733 to disappear Oct 23 01:36:35.603: INFO: Pod pod-projected-secrets-3e785bca-7d23-4f34-84a9-de27ad3ee733 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:35.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3598" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":169,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:03.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:36.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1640" for this suite. 
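
In the Container Runtime spec above, the rpa/rpof/rpn container suffixes are shorthand for restartPolicy Always, OnFailure, and Never. A minimal sketch of the OnFailure case (image and command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-demo     # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox                  # assumption; the suite uses its own images
    command: ["sh", "-c", "exit 1"] # non-zero exit triggers a restart under OnFailure

Each non-zero exit increments status.containerStatuses[].restartCount, which is the expected 'RestartCount' the test asserts; under restartPolicy Never the same exit instead drives the pod 'Phase' to Failed.
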
• [SLOW TEST:33.254 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":410,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":10,"skipped":192,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:08.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Oct 23 01:36:28.762: INFO: EndpointSlice for Service endpointslice-7943/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:38.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7943" for this suite. 
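
The EndpointSlice spec exercises the control loop that derives Endpoints and EndpointSlice objects from a Service selector. A sketch of a Service with a named target port, like the example-named-port service in the log (the selector and port name are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: example-named-port
spec:
  selector:
    app: demo                 # illustrative; must match labels on the backing Pods
  ports:
  - name: http
    port: 80
    targetPort: named-port    # resolved against a containerPort named 'named-port' on each Pod

For any Service with a selector, the endpointslice controller maintains discovery.k8s.io/v1 EndpointSlices labeled kubernetes.io/service-name=<service>, and recreates them if they are deleted out from under it, which is what the "recreating EndpointSlices after they've been deleted" step waits for.
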
• [SLOW TEST:30.115 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":11,"skipped":192,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:30.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W1023 01:35:36.862064 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:36:38.880: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:38.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5957" for this suite. 
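
The Garbage collector spec above deletes a replication controller with a propagation policy that keeps the owner visible until its dependents are gone, i.e. foreground cascading deletion. As a hedged sketch, the DELETE request body looks roughly like this (kubectl equivalent on v1.20+: `kubectl delete rc <name> --cascade=foreground`):

kind: DeleteOptions
apiVersion: v1                  # meta/v1 DeleteOptions, served under the owner's group version
propagationPolicy: Foreground   # alternatives: Background, Orphan

With Foreground, the API server sets deletionTimestamp and the foregroundDeletion finalizer on the RC; the object stays readable until the garbage collector has deleted all owned pods, after which the finalizer is removed and the RC disappears.
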
• [SLOW TEST:68.092 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":23,"skipped":515,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:35.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Oct 23 01:36:35.673: INFO: Waiting up to 5m0s for pod "client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96" in namespace "containers-5912" to be "Succeeded or Failed" Oct 23 01:36:35.675: INFO: Pod "client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054407ms Oct 23 01:36:37.678: INFO: Pod "client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005713215s Oct 23 01:36:39.683: INFO: Pod "client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010016999s STEP: Saw pod success Oct 23 01:36:39.683: INFO: Pod "client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96" satisfied condition "Succeeded or Failed" Oct 23 01:36:39.685: INFO: Trying to get logs from node node2 pod client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96 container agnhost-container: STEP: delete the pod Oct 23 01:36:39.700: INFO: Waiting for pod client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96 to disappear Oct 23 01:36:39.702: INFO: Pod client-containers-25dddea8-e388-40c2-ba15-0c31d66f5d96 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5912" for this suite. 
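
The Docker Containers spec verifies that a pod-level `command` replaces the image's default ENTRYPOINT. A minimal sketch (the image and echo text are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox               # assumption; the suite uses the agnhost test image
    command: ["echo", "command overrides the image ENTRYPOINT"]

Note the asymmetry in Kubernetes: `command` overrides the image ENTRYPOINT and `args` overrides CMD; setting `command` without `args` also discards the image's CMD.
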
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":182,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:39.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:39.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6422" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":15,"skipped":193,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:32.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Oct 23 01:36:40.708: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3036 PodName:pod-sharedvolume-3e1b98cd-a076-4fa3-a558-ad901b5020aa ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:36:40.708: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:36:40.921: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:40.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3036" for this suite. 
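
The EmptyDir spec above execs `cat /usr/share/volumeshare/shareddata.txt` in the main container to read data written by a sibling container. A sketch of the pod shape (the container commands are illustrative; the names and paths follow the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo    # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: busybox-main-container
    image: busybox               # assumption; the suite uses its own images
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-writer         # illustrative second container
    image: busybox
    command: ["sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare

An emptyDir volume is allocated per pod, not per container, so every container that mounts it sees the same backing directory for the pod's lifetime.
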
• [SLOW TEST:8.303 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":14,"skipped":208,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:36.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-089df572-7ffa-460d-b012-78b5e2b0642b STEP: Creating a pod to test consume secrets Oct 23 01:36:36.387: INFO: Waiting up to 5m0s for pod "pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576" in namespace "secrets-9195" to be "Succeeded or Failed" Oct 23 01:36:36.389: INFO: Pod "pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576": Phase="Pending", Reason="", readiness=false. Elapsed: 1.996285ms Oct 23 01:36:38.393: INFO: Pod "pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005252027s Oct 23 01:36:40.396: INFO: Pod "pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009193988s Oct 23 01:36:42.400: INFO: Pod "pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013153148s STEP: Saw pod success Oct 23 01:36:42.401: INFO: Pod "pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576" satisfied condition "Succeeded or Failed" Oct 23 01:36:42.404: INFO: Trying to get logs from node node2 pod pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576 container secret-volume-test: STEP: delete the pod Oct 23 01:36:42.420: INFO: Waiting for pod pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576 to disappear Oct 23 01:36:42.425: INFO: Pod pod-secrets-fa3a34bc-6d8c-46cf-893b-2bc9c4289576 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:42.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9195" for this suite. 
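
The Secrets volume spec above sets `defaultMode` on the projected files. A minimal sketch (the secret name, mode, and command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo        # illustrative name
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # assumed to exist in the same namespace
      defaultMode: 0400         # octal (256 decimal); applied to every file in the volume
  containers:
  - name: secret-volume-test
    image: busybox              # assumption; the suite uses its own images
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true

`defaultMode` applies to all keys in the volume; individual files can override it with a per-item `mode` under `items`.
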
• [SLOW TEST:6.085 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":413,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:39.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Oct 23 01:36:39.815: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:41.817: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:43.818: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:45.819: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:46.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2821" for this suite. 
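
The ReplicationController spec first creates a bare pod labeled name=pod-adoption, then creates an RC whose selector matches it. A sketch of that RC (the image and command are illustrative assumptions; the selector follows the log):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption          # matches the pre-existing orphan Pod's 'name' label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox          # assumption; the suite uses its own image
        command: ["sleep", "3600"]

Because the orphan pod already satisfies the selector and carries no controller ownerReference, the RC adopts it (adding an ownerReference with controller: true) rather than creating a replacement, which is the "Then the orphan pod is adopted" step.
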
• [SLOW TEST:7.068 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":16,"skipped":194,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:38.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-263df1df-b44b-4009-b8d1-66620c353ea2 STEP: Creating a pod to test consume secrets Oct 23 01:36:38.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62" in namespace "projected-83" to be "Succeeded or Failed" Oct 23 01:36:38.949: INFO: Pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628299ms Oct 23 01:36:40.951: INFO: Pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005144582s Oct 23 01:36:42.954: INFO: Pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007906406s Oct 23 01:36:44.957: INFO: Pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011319238s Oct 23 01:36:46.961: INFO: Pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015109917s STEP: Saw pod success Oct 23 01:36:46.961: INFO: Pod "pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62" satisfied condition "Succeeded or Failed" Oct 23 01:36:46.964: INFO: Trying to get logs from node node2 pod pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62 container projected-secret-volume-test: STEP: delete the pod Oct 23 01:36:46.975: INFO: Waiting for pod pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62 to disappear Oct 23 01:36:46.977: INFO: Pod pod-projected-secrets-5e5a0142-bcb9-4ce9-8680-cdee74413c62 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:46.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-83" for this suite. 
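
The Projected secret spec combines `defaultMode` with a pod-level `fsGroup` so a non-root user can read the files. A hedged sketch (the UID/GID, mode, and names are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 2000                    # volume files get this group ownership
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440              # group-readable so runAsUser can read via fsGroup
      sources:
      - secret:
          name: projected-secret-test   # assumed to exist in the namespace
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # assumption; the suite uses its own images
    command: ["sh", "-c", "ls -ln /etc/projected-secret"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret

fsGroup changes the group owner of the projected files, so a mode granting group read (0440 here) is what lets the non-root UID consume the secret.
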
• [SLOW TEST:8.074 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":523,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:40.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-fbe25253-6050-4cce-8729-cd061706837e STEP: Creating a pod to test consume configMaps Oct 23 01:36:40.995: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae" in namespace "projected-6836" to be "Succeeded or Failed" Oct 23 01:36:40.999: INFO: Pod "pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40188ms Oct 23 01:36:43.003: INFO: Pod "pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008014304s Oct 23 01:36:45.005: INFO: Pod "pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010484415s Oct 23 01:36:47.009: INFO: Pod "pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014441683s STEP: Saw pod success Oct 23 01:36:47.009: INFO: Pod "pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae" satisfied condition "Succeeded or Failed" Oct 23 01:36:47.011: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae container agnhost-container: STEP: delete the pod Oct 23 01:36:47.032: INFO: Waiting for pod pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae to disappear Oct 23 01:36:47.034: INFO: Pod pod-projected-configmaps-91c0f125-e2a3-4a63-8c1d-8f5a92a4acae no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:47.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6836" for this suite. 
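
The Projected configMap spec consumes a ConfigMap as a non-root user; `items` can additionally remap a key to a chosen path. A sketch under those assumptions (the key, path, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                     # non-root
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # assumed to exist in the namespace
          items:
          - key: data-1                 # illustrative key
            path: path/to/data-1        # materialized at this path under the mount
  containers:
  - name: agnhost-container
    image: busybox                      # assumption; the suite uses the agnhost image
    command: ["cat", "/etc/projected-configmap/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap
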
• [SLOW TEST:6.085 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":222,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:38.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 01:36:38.840: INFO: The status of Pod annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:40.842: INFO: The status of Pod annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:42.843: INFO: The status of Pod annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:44.843: INFO: The status of Pod annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:46.843: INFO: The status of Pod annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37 is Running (Ready = true) Oct 23 01:36:47.363: INFO: Successfully updated pod "annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:50.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4017" for this suite. 
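
The Projected downwardAPI spec patches the pod's annotations mid-run ("Successfully updated pod" above) and waits for the mounted file to change. A sketch of the pod shape (the annotation key/value and command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # illustrative name
  annotations:
    build: one                    # the value the test later mutates
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
  containers:
  - name: client-container
    image: busybox                # assumption; the suite uses its own images
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo

Unlike environment variables, downwardAPI volume files are refreshed by the kubelet, so an annotation update shows up in /etc/podinfo/annotations without restarting the container; propagation is bounded by the kubelet's sync period, which is why the spec waits after the update rather than asserting immediately.
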
• [SLOW TEST:11.207 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":203,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:47.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:36:47.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59" in namespace "projected-1948" to be "Succeeded or Failed" Oct 23 01:36:47.070: INFO: Pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499243ms Oct 23 01:36:49.074: INFO: Pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00712007s Oct 23 01:36:51.079: INFO: Pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011643948s Oct 23 01:36:53.083: INFO: Pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015376706s Oct 23 01:36:55.086: INFO: Pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018349785s STEP: Saw pod success Oct 23 01:36:55.086: INFO: Pod "downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59" satisfied condition "Succeeded or Failed" Oct 23 01:36:55.088: INFO: Trying to get logs from node node2 pod downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59 container client-container: STEP: delete the pod Oct 23 01:36:55.195: INFO: Waiting for pod downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59 to disappear Oct 23 01:36:55.197: INFO: Pod downwardapi-volume-243b7589-5e50-44d8-a5bc-da5b9aab4a59 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:55.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1948" for this suite. 
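The DefaultMode spec above checks permission bits on projected files. The knob is projected.defaultMode; a sketch (names and image are illustrative, 0400 stands in for whatever mode the test asserts, and mode bits are only enforced on Linux nodes, hence [LinuxOnly]):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-defaultmode
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # stat -L follows the volume's internal symlink; expect -r-------- (0400)
        command: ["stat", "-L", "-c", "%A", "/etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0400        # applied to every file in the volume
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF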
• [SLOW TEST:8.170 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":551,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:42.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:36:42.867: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:36:44.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549802, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549802, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549802, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549802, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:36:47.888: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:36:47.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4558-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:55.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1105" for 
this suite. STEP: Destroying namespace "webhook-1105-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.539 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":24,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:46.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:36:46.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d" in namespace "downward-api-1092" to be "Succeeded or Failed" Oct 23 01:36:46.934: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183747ms Oct 23 01:36:48.938: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00638304s Oct 23 01:36:50.943: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01103616s Oct 23 01:36:52.945: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013805549s Oct 23 01:36:54.949: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017551648s Oct 23 01:36:56.954: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.02260339s STEP: Saw pod success Oct 23 01:36:56.954: INFO: Pod "downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d" satisfied condition "Succeeded or Failed" Oct 23 01:36:56.956: INFO: Trying to get logs from node node2 pod downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d container client-container: STEP: delete the pod Oct 23 01:36:56.968: INFO: Waiting for pod downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d to disappear Oct 23 01:36:56.970: INFO: Pod downwardapi-volume-3aed5cfb-3622-44f1-ac4e-48a51feebe4d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:56.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1092" for this suite. • [SLOW TEST:10.080 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":227,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:56.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 23 01:36:57.015: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:57.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7339" for this suite. 
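The Events API spec above exercises list-by-label followed by DeleteCollection. The kubectl equivalent is below (the label is illustrative, and the namespace exists only for the duration of the run; kubectl's bulk delete with a selector is a DeleteCollection call under the hood):

    kubectl get events -n events-7339 -l testevent-set=true      # the labelled set
    kubectl delete events -n events-7339 -l testevent-set=true   # DeleteCollection
    kubectl get events -n events-7339 -l testevent-set=true      # should now be empty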
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":18,"skipped":228,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:50.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 01:36:50.052: INFO: The status of Pod annotationupdate96476825-d2df-4861-9131-2f4ec8f36eec is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:52.056: INFO: The status of Pod annotationupdate96476825-d2df-4861-9131-2f4ec8f36eec is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:54.055: INFO: The status of Pod annotationupdate96476825-d2df-4861-9131-2f4ec8f36eec is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:56.055: INFO: The status of Pod annotationupdate96476825-d2df-4861-9131-2f4ec8f36eec is Running (Ready = true) Oct 23 01:36:56.570: INFO: Successfully updated pod "annotationupdate96476825-d2df-4861-9131-2f4ec8f36eec" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:58.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9689" for this suite. 
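This variant of the annotation-update spec uses a plain downwardAPI volume rather than a projected one; the behavior is the same, because the kubelet periodically re-resolves downward API sources in both forms. Given a pod like the earlier annotation sketch, the update half of the flow is (pod name and annotation key are illustrative):

    # Change an annotation, then wait for the kubelet's next volume sync to
    # rewrite the file; the test polls until the new value appears.
    kubectl annotate pod demo-annotations --overwrite build=two
    kubectl exec demo-annotations -- cat /etc/podinfo/annotations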
• [SLOW TEST:8.814 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":205,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:58.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-fdeb8d13-7d95-4207-85e2-68784055907d [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:58.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7401" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":14,"skipped":208,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:55.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:36:55.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82" in namespace "downward-api-9883" to be "Succeeded or Failed" Oct 23 01:36:55.269: INFO: Pod "downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82": Phase="Pending", Reason="", readiness=false. Elapsed: 1.909836ms Oct 23 01:36:57.271: INFO: Pod "downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004396992s Oct 23 01:36:59.274: INFO: Pod "downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007517831s STEP: Saw pod success Oct 23 01:36:59.274: INFO: Pod "downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82" satisfied condition "Succeeded or Failed" Oct 23 01:36:59.277: INFO: Trying to get logs from node node2 pod downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82 container client-container: STEP: delete the pod Oct 23 01:36:59.289: INFO: Waiting for pod downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82 to disappear Oct 23 01:36:59.291: INFO: Pod downwardapi-volume-c18dbc0c-0ea6-4b90-bc16-4fe37a09af82 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:36:59.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9883" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":560,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:34:34.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-2514 STEP: creating replication controller nodeport-test in namespace services-2514 I1023 01:34:34.853349 29 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2514, replica count: 2 I1023 01:34:37.905225 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:34:40.905794 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:34:43.906355 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:34:46.908654 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:34:49.909810 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:34:49.909: INFO: Creating new exec pod Oct 23 01:34:54.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Oct 23 01:34:55.263: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Oct 23 01:34:55.263: INFO: stdout: "" 
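The NodePort spec above probes the new service three ways, as the exec commands show: by service name, by ClusterIP (10.233.9.28:80), and by node IP plus the allocated node port (10.10.190.207:32670). Each probe is a one-shot netcat from the exec pod, e.g.:

    # -v verbose, -t TCP, -w 2 = two-second connect timeout
    kubectl -n services-2514 exec execpod5fmgl -- /bin/sh -x -c \
      'echo hostName | nc -v -t -w 2 10.10.190.207 32670'

"Connection refused" in the retries that follow means nothing is answering on that node port yet (typically kube-proxy has not finished programming it on that node); the framework keeps retrying about once per second.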
Oct 23 01:34:56.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Oct 23 01:34:56.708: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Oct 23 01:34:56.708: INFO: stdout: "" Oct 23 01:34:57.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Oct 23 01:34:57.719: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Oct 23 01:34:57.719: INFO: stdout: "nodeport-test-ml4fp" Oct 23 01:34:57.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.9.28 80' Oct 23 01:34:57.962: INFO: stderr: "+ nc -v -t -w 2 10.233.9.28 80\n+ echo hostName\nConnection to 10.233.9.28 80 port [tcp/http] succeeded!\n" Oct 23 01:34:57.963: INFO: stdout: "nodeport-test-ml4fp" Oct 23 01:34:57.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:34:58.242: INFO: rc: 1 Oct 23 01:34:58.242: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:34:59.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:34:59.470: INFO: rc: 1 Oct 23 01:34:59.470: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:35:00.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:35:00.482: INFO: rc: 1 Oct 23 01:35:00.483: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:35:01.243 - 01:36:03.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' retried roughly once per second; every attempt returned rc: 1 with stderr "+ nc -v -t -w 2 10.10.190.207 32670 / + echo hostName / nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused / command terminated with exit code 1", followed by "Retrying...".
Oct 23 01:36:04.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:04.536: INFO: rc: 1 Oct 23 01:36:04.536: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:05.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:05.516: INFO: rc: 1 Oct 23 01:36:05.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:06.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:06.488: INFO: rc: 1 Oct 23 01:36:06.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:07.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:07.464: INFO: rc: 1 Oct 23 01:36:07.464: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:08.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:08.489: INFO: rc: 1 Oct 23 01:36:08.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:09.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:09.457: INFO: rc: 1 Oct 23 01:36:09.457: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:10.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:10.493: INFO: rc: 1 Oct 23 01:36:10.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:11.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:11.467: INFO: rc: 1 Oct 23 01:36:11.467: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:12.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:12.489: INFO: rc: 1 Oct 23 01:36:12.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:13.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:13.688: INFO: rc: 1 Oct 23 01:36:13.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:14.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:14.497: INFO: rc: 1 Oct 23 01:36:14.497: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:15.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:15.503: INFO: rc: 1 Oct 23 01:36:15.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:16.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:16.576: INFO: rc: 1 Oct 23 01:36:16.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:17.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:17.538: INFO: rc: 1 Oct 23 01:36:17.538: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:18.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:18.498: INFO: rc: 1 Oct 23 01:36:18.498: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:19.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:19.788: INFO: rc: 1 Oct 23 01:36:19.788: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:20.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:20.490: INFO: rc: 1 Oct 23 01:36:20.490: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:21.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:21.527: INFO: rc: 1 Oct 23 01:36:21.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:22.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:23.118: INFO: rc: 1 Oct 23 01:36:23.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:23.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:23.486: INFO: rc: 1 Oct 23 01:36:23.486: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:24.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:24.522: INFO: rc: 1 Oct 23 01:36:24.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:25.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:25.528: INFO: rc: 1 Oct 23 01:36:25.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:26.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:26.534: INFO: rc: 1 Oct 23 01:36:26.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:27.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:27.625: INFO: rc: 1 Oct 23 01:36:27.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:28.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:29.127: INFO: rc: 1 Oct 23 01:36:29.127: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:29.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:30.104: INFO: rc: 1 Oct 23 01:36:30.104: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:30.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:31.059: INFO: rc: 1 Oct 23 01:36:31.059: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:31.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:31.622: INFO: rc: 1 Oct 23 01:36:31.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:32.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:32.528: INFO: rc: 1 Oct 23 01:36:32.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:33.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:34.022: INFO: rc: 1 Oct 23 01:36:34.022: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:34.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:34.735: INFO: rc: 1 Oct 23 01:36:34.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:35.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:35.583: INFO: rc: 1 Oct 23 01:36:35.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:36.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:36.556: INFO: rc: 1 Oct 23 01:36:36.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:37.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:37.937: INFO: rc: 1 Oct 23 01:36:37.937: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:38.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:38.710: INFO: rc: 1 Oct 23 01:36:38.710: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:39.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:39.653: INFO: rc: 1 Oct 23 01:36:39.653: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:40.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:40.860: INFO: rc: 1 Oct 23 01:36:40.860: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:41.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:42.214: INFO: rc: 1 Oct 23 01:36:42.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:42.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:42.806: INFO: rc: 1 Oct 23 01:36:42.806: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:43.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:43.956: INFO: rc: 1 Oct 23 01:36:43.956: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:44.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:44.504: INFO: rc: 1 Oct 23 01:36:44.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:45.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:45.587: INFO: rc: 1 Oct 23 01:36:45.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:46.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:46.556: INFO: rc: 1 Oct 23 01:36:46.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:47.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:47.474: INFO: rc: 1 Oct 23 01:36:47.474: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:48.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:48.669: INFO: rc: 1 Oct 23 01:36:48.669: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:49.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:50.196: INFO: rc: 1 Oct 23 01:36:50.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:50.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:50.919: INFO: rc: 1 Oct 23 01:36:50.919: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:51.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:52.028: INFO: rc: 1 Oct 23 01:36:52.028: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:52.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:52.655: INFO: rc: 1 Oct 23 01:36:52.655: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:53.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:53.700: INFO: rc: 1 Oct 23 01:36:53.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:36:54.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:54.509: INFO: rc: 1 Oct 23 01:36:54.509: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:55.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:55.558: INFO: rc: 1 Oct 23 01:36:55.558: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:56.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:56.546: INFO: rc: 1 Oct 23 01:36:56.546: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:57.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:57.601: INFO: rc: 1 Oct 23 01:36:57.601: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:58.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670' Oct 23 01:36:58.488: INFO: rc: 1 Oct 23 01:36:58.488: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32670 + echo hostName nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
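For orientation: the loop above is the e2e framework polling the NodePort endpoint (10.10.190.207:32670) from inside the exec pod until it either connects or an overall 2m0s budget expires; each attempt is an nc with a 2s per-dial timeout (-w 2), spaced roughly one second apart. The actual implementation lives in the framework (see test/e2e/network/service.go:1169 in the stack trace below); what follows is only a minimal, self-contained Go sketch of the same timing logic, probing directly with the standard library rather than through kubectl exec:

package main

import (
	"fmt"
	"net"
	"time"
)

// pollTCP dials addr once per interval until a connection succeeds or the
// overall budget expires, mirroring the log above: ~1s between attempts,
// a 2s per-dial timeout (like nc -w 2), and a 2m0s total budget.
func pollTCP(addr string, interval, dialTimeout, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, dialTimeout)
		if err == nil {
			conn.Close()
			return nil // service reachable
		}
		fmt.Printf("dial %s failed: %v; retrying...\n", addr, err)
		time.Sleep(interval)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", overall, addr)
}

func main() {
	// Endpoint taken from this log; substitute your own node IP and NodePort.
	if err := pollTCP("10.10.190.207:32670", time.Second, 2*time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

The final attempt and the resulting test failure follow.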
Oct 23 01:36:58.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670'
Oct 23 01:36:59.010: INFO: rc: 1
Oct 23 01:36:59.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2514 exec execpod5fmgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32670:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32670
+ echo hostName
nc: connect to 10.10.190.207 port 32670 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:36:59.011: FAIL: Unexpected error:
    <*errors.errorString | 0xc0048124d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32670 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32670 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000782d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000782d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000782d80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2514".
STEP: Found 17 events.
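Note the shape of this failure before reading the diagnostics below: every pod involved reached Running and Ready (see the pod conditions that follow), yet the NodePort refused connections for the full two minutes. In practice that combination usually points at the Service having no ready endpoints or at kube-proxy on the probed node not having programmed the port. A small triage sketch in the same vein, assuming kubectl is on PATH and pointed at this cluster, and assuming the Service shares the nodeport-test name visible in the events (the log does not print the Service name directly):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runKubectl shells out to kubectl and prints whatever comes back; errors
// are printed rather than fatal so every check runs.
func runKubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %s\n%s", strings.Join(args, " "), out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// Namespace and names taken from this log; adjust for your own run.
	runKubectl("-n", "services-2514", "get", "svc", "nodeport-test", "-o", "wide")
	runKubectl("-n", "services-2514", "get", "endpoints", "nodeport-test")
	runKubectl("-n", "services-2514", "get", "events", "--sort-by=.lastTimestamp")
}

The events and pod states the framework itself collected are below.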
Oct 23 01:36:59.016: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod5fmgl: { } Scheduled: Successfully assigned services-2514/execpod5fmgl to node2
Oct 23 01:36:59.016: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-dzdbc: { } Scheduled: Successfully assigned services-2514/nodeport-test-dzdbc to node1
Oct 23 01:36:59.016: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-ml4fp: { } Scheduled: Successfully assigned services-2514/nodeport-test-ml4fp to node1
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:34 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-dzdbc
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:34 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-ml4fp
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:41 +0000 UTC - event for nodeport-test-dzdbc: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:41 +0000 UTC - event for nodeport-test-ml4fp: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:41 +0000 UTC - event for nodeport-test-ml4fp: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 431.197743ms
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:42 +0000 UTC - event for nodeport-test-ml4fp: {kubelet node1} Created: Created container nodeport-test
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:42 +0000 UTC - event for nodeport-test-ml4fp: {kubelet node1} Started: Started container nodeport-test
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:47 +0000 UTC - event for nodeport-test-dzdbc: {kubelet node1} Started: Started container nodeport-test
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:47 +0000 UTC - event for nodeport-test-dzdbc: {kubelet node1} Created: Created container nodeport-test
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:47 +0000 UTC - event for nodeport-test-dzdbc: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 6.047731233s
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:52 +0000 UTC - event for execpod5fmgl: {kubelet node2} Started: Started container agnhost-container
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:52 +0000 UTC - event for execpod5fmgl: {kubelet node2} Created: Created container agnhost-container
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:52 +0000 UTC - event for execpod5fmgl: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:36:59.016: INFO: At 2021-10-23 01:34:52 +0000 UTC - event for execpod5fmgl: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 318.102288ms
Oct 23 01:36:59.019: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 01:36:59.019: INFO: execpod5fmgl node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:49 +0000 UTC }]
Oct 23 01:36:59.019: INFO: nodeport-test-dzdbc node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23
01:34:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:34 +0000 UTC }] Oct 23 01:36:59.019: INFO: nodeport-test-ml4fp node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:34:34 +0000 UTC }] Oct 23 01:36:59.020: INFO: Oct 23 01:36:59.023: INFO: Logging node info for node master1 Oct 23 01:36:59.026: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 97910 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:49 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:49 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:49 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:36:49 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:36:59.026: INFO: Logging kubelet events for node master1 Oct 23 01:36:59.030: INFO: Logging pods the kubelet thinks is on node master1 Oct 23 01:36:59.047: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:36:59.047: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:36:59.047: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:36:59.047: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:36:59.047: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Container coredns ready: true, restart count 2 Oct 23 01:36:59.047: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container 
statuses recorded) Oct 23 01:36:59.047: INFO: Container docker-registry ready: true, restart count 0 Oct 23 01:36:59.047: INFO: Container nginx ready: true, restart count 0 Oct 23 01:36:59.047: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:36:59.047: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:36:59.047: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:36:59.047: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:36:59.047: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 01:36:59.047: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.047: INFO: Container kube-scheduler ready: true, restart count 0 W1023 01:36:59.062338 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:36:59.132: INFO: Latency metrics for node master1 Oct 23 01:36:59.132: INFO: Logging node info for node master2 Oct 23 01:36:59.136: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 97979 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:36:53 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:36:59.136: INFO: Logging kubelet events for node master2 Oct 23 01:36:59.142: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 01:36:59.158: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:36:59.158: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:36:59.158: INFO: kube-proxy-2xlf2 
started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:36:59.158: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:36:59.158: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:36:59.158: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:36:59.158: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:36:59.158: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.158: INFO: Container autoscaler ready: true, restart count 1 Oct 23 01:36:59.158: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:36:59.158: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:36:59.158: INFO: Container node-exporter ready: true, restart count 0 W1023 01:36:59.173654 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:36:59.243: INFO: Latency metrics for node master2 Oct 23 01:36:59.243: INFO: Logging node info for node master3 Oct 23 01:36:59.246: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 98118 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:58 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:58 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:58 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:36:58 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:36:59.247: INFO: Logging kubelet events for node master3 Oct 23 01:36:59.249: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 01:36:59.259: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:36:59.259: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:36:59.259: INFO: kube-multus-ds-amd64-tfbmd started at 
2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:36:59.259: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container coredns ready: true, restart count 2
Oct 23 01:36:59.259: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 01:36:59.259: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:36:59.259: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:36:59.259: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 01:36:59.259: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 01:36:59.259: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 01:36:59.259: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:36:59.259: INFO: Init container install-cni ready: true, restart count 1
Oct 23 01:36:59.259: INFO: Container kube-flannel ready: true, restart count 1
W1023 01:36:59.273122 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
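The block above is the standard diagnostic dump the e2e framework emits for each node after a spec fails: the full v1.Node object ("Node Info"), the kubelet events, the container statuses of every pod bound to the node, and the metrics-grabber warning. The node conditions printed inside the dump (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready) can be re-queried directly with client-go; the sketch below is a minimal illustration, not the suite's own code, and assumes the kubeconfig path reported at startup plus an arbitrary choice of node name.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the same kubeconfig path the suite logs at startup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "master3" is an arbitrary example; any node name from the dump works.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "master3", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same fields as the NodeCondition entries in the dump above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s reason=%s heartbeat=%v\n", c.Type, c.Status, c.Reason, c.LastHeartbeatTime)
	}
}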
Oct 23 01:36:59.354: INFO: Latency metrics for node master3 Oct 23 01:36:59.354: INFO: Logging node info for node node1 Oct 23 01:36:59.357: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 98144 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:59 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:59 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:59 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:36:59 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 01:36:59.357: INFO: Logging kubelet events for node node1
Oct 23 01:36:59.359: INFO: Logging pods the kubelet thinks is on node node1
Oct 23 01:36:59.377: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 01:36:59.377: INFO: Init container install-cni ready: true, restart count 2
Oct 23 01:36:59.377: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 01:36:59.377: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.377: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 01:36:59.377: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:36:59.377: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:36:59.377: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:36:59.377: INFO: affinity-nodeport-6nj64 started at 2021-10-23 01:36:18 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.377: INFO: Container affinity-nodeport ready: true, restart count 0
Oct 23 01:36:59.377: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:36:59.378: INFO: server started at 2021-10-23 01:36:59 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container agnhost-container ready: false, restart count 0
Oct 23 01:36:59.378: INFO: affinity-nodeport-chcs4 started at 2021-10-23 01:36:18 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container affinity-nodeport ready: true, restart count 0
Oct 23 01:36:59.378: INFO: affinity-nodeport-44w4f started at 2021-10-23 01:36:18 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container affinity-nodeport ready: true, restart count 0
Oct 23 01:36:59.378: INFO: affinity-nodeport-timeout-r4j9h started at 2021-10-23 01:36:21 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container affinity-nodeport-timeout ready: true, restart count 0
Oct 23 01:36:59.378: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 01:36:59.378: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container config-reloader ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container grafana ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container prometheus ready: true, restart count 1
Oct 23 01:36:59.378: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container collectd ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:36:59.378: INFO: nodeport-test-ml4fp started at 2021-10-23 01:34:35 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container nodeport-test ready: true, restart count 0
Oct 23 01:36:59.378: INFO: pod-with-prestop-http-hook started at 2021-10-23 01:36:55 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container pod-with-prestop-http-hook ready: false, restart count 0
Oct 23 01:36:59.378: INFO: execpod-affinity779cd started at 2021-10-23 01:36:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container agnhost-container ready: true, restart count 0
Oct 23 01:36:59.378: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 01:36:59.378: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:36:59.378: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:36:59.378: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:36:59.378: INFO: simpletest-rc-to-be-deleted-5xp7k started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container nginx ready: true, restart count 0
Oct 23 01:36:59.378: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container discover ready: false, restart count 0
Oct 23 01:36:59.378: INFO: Container init ready: false, restart count 0
Oct 23 01:36:59.378: INFO: Container install ready: false, restart count 0
Oct 23 01:36:59.378: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container nodereport ready: true, restart count 0
Oct 23 01:36:59.378: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:36:59.378: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 01:36:59.378: INFO: nodeport-test-dzdbc started at 2021-10-23 01:34:35 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container nodeport-test ready: true, restart count 0
Oct 23 01:36:59.378: INFO: sample-webhook-deployment-78988fc6cd-wd4w8 started at 2021-10-23 01:36:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:36:59.378: INFO: Container sample-webhook ready: false, restart count 0
W1023 01:36:59.391781 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
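Each per-node section above closes with the same W1023 metrics_grabber warning: without an external client interface the grabber skips ClusterAutoscaler metrics, which the framework records as a warning rather than a failure. The "Logging pods the kubelet thinks is on node ..." listing that precedes it amounts to a pod query filtered by spec.nodeName; the sketch below is an illustrative client-go equivalent, not the framework's implementation, under the same kubeconfig-path assumption as before.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the kubeconfig path reported at suite startup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// All pods bound to node1, across every namespace, matching the listing above.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, s := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, s.Name, s.Ready, s.RestartCount)
		}
	}
}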
Oct 23 01:37:01.151: INFO: Latency metrics for node node1 Oct 23 01:37:01.151: INFO: Logging node info for node node2 Oct 23 01:37:01.153: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 98068 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:56 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:56 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:36:56 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:36:56 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:37:01.154: INFO: Logging kubelet events for node node2 Oct 23 01:37:01.156: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 01:37:01.299: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:37:01.299: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:37:01.299: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:37:01.299: INFO: simpletest-rc-to-be-deleted-2bs5t started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container nginx ready: true, restart count 0 Oct 23 01:37:01.299: INFO: pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71 started at 2021-10-23 01:36:59 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container test-container ready: false, restart count 0 Oct 23 01:37:01.299: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:37:01.299: INFO: Container collectd ready: true, restart count 0 Oct 23 01:37:01.299: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:37:01.299: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:37:01.299: INFO: simpletest-rc-to-be-deleted-6l9dw started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container nginx ready: true, restart count 0 Oct 23 01:37:01.299: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:37:01.299: INFO: simpletest-rc-to-be-deleted-cdw8c started at 2021-10-23 01:35:51 +0000 UTC (0+1 
container statuses recorded) Oct 23 01:37:01.299: INFO: Container nginx ready: true, restart count 0 Oct 23 01:37:01.299: INFO: execpod5fmgl started at 2021-10-23 01:34:49 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:37:01.299: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:37:01.299: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:37:01.299: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:37:01.299: INFO: affinity-nodeport-timeout-db4ph started at 2021-10-23 01:36:21 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 23 01:37:01.299: INFO: pod-0 started at 2021-10-23 01:36:59 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container donothing ready: false, restart count 0 Oct 23 01:37:01.299: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:37:01.299: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 01:37:01.299: INFO: Container discover ready: false, restart count 0 Oct 23 01:37:01.299: INFO: Container init ready: false, restart count 0 Oct 23 01:37:01.299: INFO: Container install ready: false, restart count 0 Oct 23 01:37:01.299: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:37:01.299: INFO: affinity-nodeport-timeout-brs4b started at 2021-10-23 01:36:21 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 23 01:37:01.299: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:37:01.299: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:37:01.299: INFO: execpod-affinitypgknc started at 2021-10-23 01:36:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:37:01.299: INFO: annotationupdate96476825-d2df-4861-9131-2f4ec8f36eec started at 2021-10-23 01:36:50 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container client-container ready: true, restart count 0 Oct 23 01:37:01.299: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.299: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:37:01.299: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:37:01.299: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:37:01.299: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:37:01.299: INFO: pod-handle-http-request started at 2021-10-23 01:36:47 +0000 UTC (0+1 container statuses recorded) Oct 23 01:37:01.300: INFO: Container agnhost-container ready: true, restart count 0 
Oct 23 01:37:01.300: INFO: liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 started at 2021-10-23 01:35:29 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:37:01.300: INFO: Container agnhost-container ready: true, restart count 4
Oct 23 01:37:01.300: INFO: annotationupdate47dda030-6998-4f50-a1af-6036c0c3ec37 started at 2021-10-23 01:36:38 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:37:01.300: INFO: Container client-container ready: false, restart count 0
Oct 23 01:37:01.300: INFO: simpletest-rc-to-be-deleted-kjc9s started at 2021-10-23 01:35:51 +0000 UTC (0+1 container statuses recorded)
Oct 23 01:37:01.300: INFO: Container nginx ready: true, restart count 0
W1023 01:37:01.318950 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 01:37:01.917: INFO: Latency metrics for node node2
Oct 23 01:37:01.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2514" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [147.106 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to create a functioning NodePort service [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

Oct 23 01:36:59.011: Unexpected error:
<*errors.errorString | 0xc0048124d0>: {
    s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32670 over TCP protocol",
}
service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32670 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":17,"skipped":291,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:36:57.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Oct 23 01:36:59.111: INFO: running pods: 0 < 1
Oct 23 01:37:01.115: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:37:03.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4092" for this suite.

• [SLOW TEST:6.087 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update/patch PodDisruptionBudget status [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":19,"skipped":243,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:35:51.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1023 01:36:01.289215 33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 01:37:03.305: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Oct 23 01:37:03.305: INFO: Deleting pod "simpletest-rc-to-be-deleted-2bs5t" in namespace "gc-4906"
Oct 23 01:37:03.312: INFO: Deleting pod "simpletest-rc-to-be-deleted-5xp7k" in namespace "gc-4906"
Oct 23 01:37:03.318: INFO: Deleting pod "simpletest-rc-to-be-deleted-6l9dw" in namespace "gc-4906"
Oct 23 01:37:03.323: INFO: Deleting pod "simpletest-rc-to-be-deleted-cdw8c" in namespace "gc-4906"
Oct 23 01:37:03.328: INFO: Deleting pod "simpletest-rc-to-be-deleted-kjc9s" in namespace "gc-4906"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:37:03.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4906" for this suite.
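
Backing up to the one failure in this stretch of the run: the NodePort spec above gave up after its 2m0s retry loop against 10.10.190.207:32670. A manual version of the same reachability check is sketched below; all object names are hypothetical, and only the nginx image is taken from the node's image list above.

# Sketch: reproduce the NodePort reachability check by hand (hypothetical names).
kubectl create deployment nodeport-demo --image=nginx:1.21.1
kubectl expose deployment nodeport-demo --type=NodePort --port=80
# Discover the allocated port (30000-32767 by default) and a node's InternalIP.
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# The e2e framework retries a request like this for up to 2 minutes before failing as above.
curl --max-time 5 "http://${NODE_IP}:${NODE_PORT}/"

When a check like this times out while the backing pod itself is Ready, the usual suspects are kube-proxy rules or node-level firewalling rather than the workload, which is why the log above dumps node conditions and the kubelet's pod list before declaring failure.
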
• [SLOW TEST:72.140 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":25,"skipped":323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:59.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 23 01:36:59.371: INFO: Waiting up to 5m0s for pod "pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71" in namespace "emptydir-8713" to be "Succeeded or Failed" Oct 23 01:36:59.374: INFO: Pod "pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82097ms Oct 23 01:37:01.376: INFO: Pod "pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005130771s Oct 23 01:37:03.380: INFO: Pod "pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009112465s Oct 23 01:37:05.383: INFO: Pod "pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011872794s STEP: Saw pod success Oct 23 01:37:05.383: INFO: Pod "pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71" satisfied condition "Succeeded or Failed" Oct 23 01:37:05.385: INFO: Trying to get logs from node node2 pod pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71 container test-container: STEP: delete the pod Oct 23 01:37:05.397: INFO: Waiting for pod pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71 to disappear Oct 23 01:37:05.399: INFO: Pod pod-74e7fd6c-47b7-4dee-bbbb-d9cf88909b71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:05.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8713" for this suite. 
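
The garbage-collector spec summarized at the top of this block gave half of one ReplicationController's pods a second ownerReference pointing at simpletest-rc-to-stay, deleted the first owner, and verified that doubly-owned dependents survived (the five "Deleting pod" lines are test cleanup, not GC activity). A small sketch of the ownerReferences machinery involved, with hypothetical names:

# Sketch: inspect and exercise ownerReferences (hypothetical names).
kubectl create deployment owned-demo --image=nginx:1.21.1
# Pods point at their ReplicaSet; the ReplicaSet points at the Deployment.
kubectl get pods -l app=owned-demo -o jsonpath='{.items[0].metadata.ownerReferences[*].kind}'
# Orphan deletion removes the owner without cascading, so dependents stay,
# loosely analogous to a dependent keeping a second valid owner in the spec above.
kubectl delete deployment owned-demo --cascade=orphan
kubectl get rs -l app=owned-demo   # the ReplicaSet and its pods survive
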
• [SLOW TEST:6.070 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":582,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:47.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Oct 23 01:36:47.085: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:49.088: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:51.089: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:53.087: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:55.088: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 01:36:55.104: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:57.110: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:59.110: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:01.112: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:03.108: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 23 01:37:03.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 01:37:03.118: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 01:37:05.119: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 01:37:05.121: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 01:37:07.121: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 01:37:07.123: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 
01:37:07.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9688" for this suite. • [SLOW TEST:20.089 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":226,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:01.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:37:01.976: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 01:37:01.991: INFO: The status of Pod pod-exec-websocket-2d8bbdda-26cf-473d-8e90-e50eb6daede9 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:03.994: INFO: The status of Pod pod-exec-websocket-2d8bbdda-26cf-473d-8e90-e50eb6daede9 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:05.995: INFO: The status of Pod pod-exec-websocket-2d8bbdda-26cf-473d-8e90-e50eb6daede9 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:07.994: INFO: The status of Pod pod-exec-websocket-2d8bbdda-26cf-473d-8e90-e50eb6daede9 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:08.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5565" for this suite. 
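
The two [sig-node] specs above drive pod lifecycle machinery: one registers a preStop httpGet hook and verifies the handler pod received the request on deletion, the other execs a command over a websocket. A minimal pod with such a hook might look like the sketch below; the pod name and handler address are hypothetical stand-ins, not values from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # hypothetical name
spec:
  containers:
  - name: main
    image: nginx:1.21.1
    lifecycle:
      preStop:
        httpGet:                  # the kubelet fires this just before stopping the container
          host: 10.244.4.160      # hypothetical handler-pod IP; defaults to the pod's own IP if omitted
          path: /echo?msg=prestop
          port: 8080
EOF
# Deleting the pod triggers the hook; the handler's access log should show the request.
kubectl delete pod prestop-demo
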
• [SLOW TEST:6.293 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":301,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:56.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:36:56.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:36:58.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:37:00.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549816, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:37:03.549: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 23 01:37:11.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-4548 attach --namespace=webhook-4548 to-be-attached-pod -i -c=container1' Oct 23 01:37:11.754: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:11.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4548" for this suite. STEP: Destroying namespace "webhook-4548-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.721 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":25,"skipped":467,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:07.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-90578cef-b202-4104-a7fa-52191256dca4 STEP: Creating secret with name secret-projected-all-test-volume-f3fc1853-cb99-4c4b-a777-a72471c0048e STEP: Creating a pod to test Check all projections for projected volume plugin Oct 23 01:37:07.204: INFO: Waiting up to 5m0s for pod "projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386" in namespace "projected-2106" to be "Succeeded or Failed" Oct 23 01:37:07.206: INFO: Pod "projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057207ms Oct 23 01:37:09.210: INFO: Pod "projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005400097s Oct 23 01:37:11.213: INFO: Pod "projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009158927s Oct 23 01:37:13.217: INFO: Pod "projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01326073s STEP: Saw pod success Oct 23 01:37:13.217: INFO: Pod "projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386" satisfied condition "Succeeded or Failed" Oct 23 01:37:13.220: INFO: Trying to get logs from node node1 pod projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386 container projected-all-volume-test: STEP: delete the pod Oct 23 01:37:13.245: INFO: Waiting for pod projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386 to disappear Oct 23 01:37:13.246: INFO: Pod projected-volume-2cb5ff2f-b799-426d-9471-597d6e0bc386 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:13.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2106" for this suite. • [SLOW TEST:6.092 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":237,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:08.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 01:37:13.417: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:13.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-361" for this suite. 
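
The container-runtime spec above asserts that, with TerminationMessagePolicy FallbackToLogsOnError, a container that exits 0 reports an empty termination message, since logs are only promoted to the message on failure. A sketch of the same setup, with a hypothetical pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "echo written-to-stdout-only; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod Succeeds, the terminated state's message should be empty,
# matching the "Expected: &{} to match Container's Termination Message: --" check above.
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
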
• [SLOW TEST:5.073 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":373,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:05.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:37:05.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 create -f -' Oct 23 01:37:05.796: INFO: stderr: "" Oct 23 01:37:05.796: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 23 01:37:05.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 create -f -' Oct 23 01:37:06.129: INFO: stderr: "" Oct 23 01:37:06.129: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 23 01:37:07.132: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:07.132: INFO: Found 0 / 1 Oct 23 01:37:08.132: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:08.132: INFO: Found 0 / 1 Oct 23 01:37:09.137: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:09.138: INFO: Found 0 / 1 Oct 23 01:37:10.132: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:10.132: INFO: Found 0 / 1 Oct 23 01:37:11.132: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:11.132: INFO: Found 0 / 1 Oct 23 01:37:12.133: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:12.133: INFO: Found 0 / 1 Oct 23 01:37:13.132: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:13.132: INFO: Found 1 / 1 Oct 23 01:37:13.132: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 23 01:37:13.136: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 01:37:13.136: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 23 01:37:13.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 describe pod agnhost-primary-nw4dz' Oct 23 01:37:13.343: INFO: stderr: "" Oct 23 01:37:13.343: INFO: stdout: "Name: agnhost-primary-nw4dz\nNamespace: kubectl-1032\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Sat, 23 Oct 2021 01:37:05 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.161\"\n ],\n \"mac\": \"82:8d:12:b3:11:c7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.161\"\n ],\n \"mac\": \"82:8d:12:b3:11:c7\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.4.161\nIPs:\n IP: 10.244.4.161\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://78ab6b26b53946b40800532d1c7884c348f38678f4b00a9c0091e36baa3079ce\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 23 Oct 2021 01:37:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxqzc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-jxqzc:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned kubectl-1032/agnhost-primary-nw4dz to node2\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 631.180596ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Oct 23 01:37:13.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 describe rc agnhost-primary' Oct 23 01:37:13.545: INFO: stderr: "" Oct 23 01:37:13.545: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1032\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-primary-nw4dz\n" Oct 23 01:37:13.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 describe service agnhost-primary' Oct 23 01:37:13.711: INFO: stderr: "" Oct 23 
01:37:13.711: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1032\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.19.18\nIPs: 10.233.19.18\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.4.161:6379\nSession Affinity: None\nEvents: \n" Oct 23 01:37:13.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 describe node master1' Oct 23 01:37:13.922: INFO: stderr: "" Oct 23 01:37:13.922: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 22 Oct 2021 21:03:37 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Sat, 23 Oct 2021 01:37:12 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 22 Oct 2021 21:09:07 +0000 Fri, 22 Oct 2021 21:09:07 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Sat, 23 Oct 2021 01:37:09 +0000 Fri, 22 Oct 2021 21:03:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 23 Oct 2021 01:37:09 +0000 Fri, 22 Oct 2021 21:03:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 23 Oct 2021 01:37:09 +0000 Fri, 22 Oct 2021 21:03:34 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 23 Oct 2021 01:37:09 +0000 Fri, 22 Oct 2021 21:09:03 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518324Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629492Ki\n pods: 110\nSystem Info:\n Machine ID: 30ce143f9c9243b59253027a77cdbf77\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: e78651c4-73ca-42e7-8083-bc7c7ebac4b6\n Kernel Version: 3.10.0-1160.45.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.9\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-wtz5j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h26m\n kube-system coredns-8474476ff8-q8d8x 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4h30m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 4h24m\n 
kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 4h32m\n kube-system kube-flannel-8vnf2 150m (0%) 300m (0%) 64M (0%) 500M (0%) 4h30m\n kube-system kube-multus-ds-amd64-vl8qj 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 4h30m\n kube-system kube-proxy-fhqkt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h31m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4h14m\n monitoring node-exporter-fxb7q 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 4h17m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 23 01:37:13.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1032 describe namespace kubectl-1032' Oct 23 01:37:14.083: INFO: stderr: "" Oct 23 01:37:14.083: INFO: stdout: "Name: kubectl-1032\nLabels: e2e-framework=kubectl\n e2e-run=90ff8cef-4e5e-4270-82cc-c737b9b42342\n kubernetes.io/metadata.name=kubectl-1032\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:14.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1032" for this suite. • [SLOW TEST:8.678 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":28,"skipped":586,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:11.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 23 01:37:11.842: INFO: Waiting up to 5m0s for pod "pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826" in namespace "emptydir-9001" to be "Succeeded or Failed" Oct 23 01:37:11.845: INFO: Pod "pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.695049ms Oct 23 01:37:13.849: INFO: Pod "pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006727543s Oct 23 01:37:15.854: INFO: Pod "pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011065377s STEP: Saw pod success Oct 23 01:37:15.854: INFO: Pod "pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826" satisfied condition "Succeeded or Failed" Oct 23 01:37:15.856: INFO: Trying to get logs from node node2 pod pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826 container test-container: STEP: delete the pod Oct 23 01:37:15.924: INFO: Waiting for pod pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826 to disappear Oct 23 01:37:15.926: INFO: Pod pod-d33e61a5-0d2c-4a0c-95fe-6e1f79068826 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:15.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9001" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":473,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:15.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 23 01:37:15.991: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5646 1d7777ab-c651-4783-922e-91a42fc552d7 98731 0 2021-10-23 01:37:15 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-23 01:37:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:37:15.991: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5646 1d7777ab-c651-4783-922e-91a42fc552d7 98732 0 2021-10-23 01:37:15 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-23 01:37:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:15.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5646" for this suite. 
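
The watch spec above records the resourceVersion of the first configmap update and opens its watch from that point, so only the later MODIFIED (98731) and DELETED (98732) events arrive. The same thing can be done against the raw API; the proxy port and starting version below are illustrative only, and the watch-5646 namespace from the run is already gone.

# Sketch: watch configmaps starting from a known resourceVersion.
kubectl proxy --port=8001 & PROXY_PID=$!
sleep 1
# Each response line is one watch event, e.g. {"type":"MODIFIED","object":{...}}.
# A too-old resourceVersion returns 410 Gone; take one from a recent list response.
curl -s --max-time 10 "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=98730"
kill $PROXY_PID
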
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":27,"skipped":475,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:03.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:37:03.443: INFO: Creating ReplicaSet my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9 Oct 23 01:37:03.448: INFO: Pod name my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9: Found 0 pods out of 1 Oct 23 01:37:08.451: INFO: Pod name my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9: Found 1 pods out of 1 Oct 23 01:37:08.451: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9" is running Oct 23 01:37:12.458: INFO: Pod "my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9-wx5ck" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:37:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:37:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:37:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 01:37:03 +0000 UTC Reason: Message:}]) Oct 23 01:37:12.459: INFO: Trying to dial the pod Oct 23 01:37:17.470: INFO: Controller my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9: Got expected result from replica 1 [my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9-wx5ck]: "my-hostname-basic-f9acbe0b-0b25-41db-80ad-9448038dfcf9-wx5ck", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:17.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4889" for this suite. 
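
The ReplicaSet spec above brings up one replica of an image that serves its own hostname, then dials the pod and expects the pod name back. An equivalent ReplicaSet is sketched below with a hypothetical name; agnhost's serve-hostname mode listens on 9376 by default.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-demo             # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname-demo
  template:
    metadata:
      labels:
        app: hostname-demo
    spec:
      containers:
      - name: serve
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["serve-hostname"]  # responds to HTTP GET / with the pod's name
EOF
kubectl wait --for=condition=Ready pod -l app=hostname-demo
kubectl port-forward rs/hostname-demo 9376:9376 & PF_PID=$!
sleep 1
curl -s http://127.0.0.1:9376/   # should print the pod name, e.g. hostname-demo-xxxxx
kill $PF_PID
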
• [SLOW TEST:14.058 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":26,"skipped":370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:17.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Oct 23 01:37:17.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1782 cluster-info' Oct 23 01:37:17.712: INFO: stderr: "" Oct 23 01:37:17.712: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:17.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1782" for this suite. 
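Note: the cluster-info spec only shells out to kubectl and checks that the control-plane endpoint is printed; the same check by hand:

  kubectl cluster-info
  # stdout should contain a line of the form:
  #   Kubernetes control plane is running at https://<apiserver-host>:6443
  kubectl cluster-info dump --output-directory=/tmp/cluster-state   # fuller diagnostics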
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":27,"skipped":398,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:13.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:37:13.646: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 23 01:37:15.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549833, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549833, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549833, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549833, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:37:18.666: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Oct 23 01:37:18.678: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:18.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5173" for this suite. STEP: Destroying namespace "webhook-5173-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.455 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":18,"skipped":243,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:13.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Oct 23 01:37:13.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 create -f -' Oct 23 01:37:13.758: INFO: stderr: "" Oct 23 01:37:13.758: INFO: stdout: "pod/pause created\n" Oct 23 01:37:13.758: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 23 01:37:13.758: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3367" to be "running and ready" Oct 23 01:37:13.761: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.821034ms Oct 23 01:37:15.764: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006208701s Oct 23 01:37:17.768: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.009886769s Oct 23 01:37:17.768: INFO: Pod "pause" satisfied condition "running and ready" Oct 23 01:37:17.768: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Oct 23 01:37:17.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 label pods pause testing-label=testing-label-value' Oct 23 01:37:17.929: INFO: stderr: "" Oct 23 01:37:17.929: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 23 01:37:17.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 get pod pause -L testing-label' Oct 23 01:37:18.096: INFO: stderr: "" Oct 23 01:37:18.096: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 23 01:37:18.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 label pods pause testing-label-' Oct 23 01:37:18.259: INFO: stderr: "" Oct 23 01:37:18.259: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 23 01:37:18.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 get pod pause -L testing-label' Oct 23 01:37:18.416: INFO: stderr: "" Oct 23 01:37:18.416: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Oct 23 01:37:18.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 delete --grace-period=0 --force -f -' Oct 23 01:37:18.565: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:37:18.565: INFO: stdout: "pod \"pause\" force deleted\n" Oct 23 01:37:18.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 get rc,svc -l name=pause --no-headers' Oct 23 01:37:18.759: INFO: stderr: "No resources found in kubectl-3367 namespace.\n" Oct 23 01:37:18.759: INFO: stdout: "" Oct 23 01:37:18.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3367 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 23 01:37:18.924: INFO: stderr: "" Oct 23 01:37:18.925: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:18.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3367" for this suite. 
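Note: the label spec is driven entirely by the kubectl invocations visible above; condensed, the round trip is:

  kubectl label pod pause testing-label=testing-label-value   # add
  kubectl get pod pause -L testing-label                      # show it as a column
  kubectl label pod pause testing-label-                      # a trailing '-' removes it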
• [SLOW TEST:5.486 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:03.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:19.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8081" for this suite. • [SLOW TEST:16.191 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":20,"skipped":264,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:58.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-6009 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6009 STEP: Deleting pre-stop pod Oct 23 01:37:20.005: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:20.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6009" for this suite. 
• [SLOW TEST:21.086 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":15,"skipped":246,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:16.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-7a921057-909b-44ed-a22b-05b80431c8cb STEP: Creating a pod to test consume secrets Oct 23 01:37:16.062: INFO: Waiting up to 5m0s for pod "pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325" in namespace "secrets-1526" to be "Succeeded or Failed" Oct 23 01:37:16.064: INFO: Pod "pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.575426ms Oct 23 01:37:18.068: INFO: Pod "pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006376376s Oct 23 01:37:20.071: INFO: Pod "pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009318629s STEP: Saw pod success Oct 23 01:37:20.071: INFO: Pod "pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325" satisfied condition "Succeeded or Failed" Oct 23 01:37:20.074: INFO: Trying to get logs from node node2 pod pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325 container secret-volume-test: STEP: delete the pod Oct 23 01:37:20.085: INFO: Waiting for pod pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325 to disappear Oct 23 01:37:20.086: INFO: Pod pod-secrets-cc525bbb-8ea6-4889-9758-9a8379146325 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:20.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1526" for this suite. 
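Note: the Secrets spec mounts a secret as a volume and has the test container read it back; a hypothetical equivalent:

  kubectl create secret generic secret-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["cat", "/etc/secret-volume/data-1"]
      volumeMounts:
      - {name: secret-volume, mountPath: /etc/secret-volume}
    volumes:
    - name: secret-volume
      secret: {secretName: secret-demo}
  EOF
  kubectl logs pod-secrets-demo     # prints value-1 once the pod reaches Succeeded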
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":485,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:14.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:37:14.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237" in namespace "downward-api-876" to be "Succeeded or Failed" Oct 23 01:37:14.137: INFO: Pod "downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237": Phase="Pending", Reason="", readiness=false. Elapsed: 1.927872ms Oct 23 01:37:16.140: INFO: Pod "downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00485967s Oct 23 01:37:18.144: INFO: Pod "downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008987987s Oct 23 01:37:20.146: INFO: Pod "downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011477549s STEP: Saw pod success Oct 23 01:37:20.146: INFO: Pod "downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237" satisfied condition "Succeeded or Failed" Oct 23 01:37:20.149: INFO: Trying to get logs from node node2 pod downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237 container client-container: STEP: delete the pod Oct 23 01:37:20.184: INFO: Waiting for pod downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237 to disappear Oct 23 01:37:20.186: INFO: Pod downwardapi-volume-b12c7032-3338-4a5c-8af5-d7b5ff347237 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:20.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-876" for this suite. 
• [SLOW TEST:6.091 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":589,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:19.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:37:19.527: INFO: The status of Pod busybox-scheduling-144e518d-f781-4143-ad75-e2bd2ffd4493 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:21.529: INFO: The status of Pod busybox-scheduling-144e518d-f781-4143-ad75-e2bd2ffd4493 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:23.530: INFO: The status of Pod busybox-scheduling-144e518d-f781-4143-ad75-e2bd2ffd4493 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:23.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7634" for this suite. 
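Note: the Kubelet spec just verifies that a container's stdout lands in kubectl logs; by hand:

  kubectl run busybox-logs-demo --image=busybox --restart=Never -- sh -c 'echo Hello from busybox'
  kubectl logs busybox-logs-demo      # prints: Hello from busybox
  kubectl delete pod busybox-logs-demo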
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":334,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:18.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:37:18.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15" in namespace "downward-api-7708" to be "Succeeded or Failed" Oct 23 01:37:18.775: INFO: Pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452965ms Oct 23 01:37:20.778: INFO: Pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005464919s Oct 23 01:37:22.782: INFO: Pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010012514s Oct 23 01:37:24.786: INFO: Pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01410874s Oct 23 01:37:26.790: INFO: Pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018086632s STEP: Saw pod success Oct 23 01:37:26.790: INFO: Pod "downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15" satisfied condition "Succeeded or Failed" Oct 23 01:37:26.793: INFO: Trying to get logs from node node2 pod downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15 container client-container: STEP: delete the pod Oct 23 01:37:26.814: INFO: Waiting for pod downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15 to disappear Oct 23 01:37:26.816: INFO: Pod downwardapi-volume-78a6cc86-a39d-47aa-b040-7e50a661aa15 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:26.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7708" for this suite. 
• [SLOW TEST:8.084 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:20.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Oct 23 01:37:20.252: INFO: The status of Pod pod-hostip-d215e65f-d149-499f-9d12-4434d4191da1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:22.255: INFO: The status of Pod pod-hostip-d215e65f-d149-499f-9d12-4434d4191da1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:24.256: INFO: The status of Pod pod-hostip-d215e65f-d149-499f-9d12-4434d4191da1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:26.256: INFO: The status of Pod pod-hostip-d215e65f-d149-499f-9d12-4434d4191da1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:28.255: INFO: The status of Pod pod-hostip-d215e65f-d149-499f-9d12-4434d4191da1 is Running (Ready = true) Oct 23 01:37:28.260: INFO: Pod pod-hostip-d215e65f-d149-499f-9d12-4434d4191da1 has hostIP: 10.10.190.208 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:28.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2993" for this suite. 
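Note: the Pods spec asserts that .status.hostIP is populated once the pod is running; by hand, with a hypothetical pod name:

  kubectl run hostip-demo --image=busybox --restart=Never -- sleep 300
  kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'   # the node's IP, e.g. 10.10.190.208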
• [SLOW TEST:8.049 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":600,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:17.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:37:17.753: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 23 01:37:26.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 --namespace=crd-publish-openapi-7945 create -f -' Oct 23 01:37:26.784: INFO: stderr: "" Oct 23 01:37:26.784: INFO: stdout: "e2e-test-crd-publish-openapi-717-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 23 01:37:26.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 --namespace=crd-publish-openapi-7945 delete e2e-test-crd-publish-openapi-717-crds test-cr' Oct 23 01:37:26.936: INFO: stderr: "" Oct 23 01:37:26.936: INFO: stdout: "e2e-test-crd-publish-openapi-717-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 23 01:37:26.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 --namespace=crd-publish-openapi-7945 apply -f -' Oct 23 01:37:27.247: INFO: stderr: "" Oct 23 01:37:27.247: INFO: stdout: "e2e-test-crd-publish-openapi-717-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 23 01:37:27.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 --namespace=crd-publish-openapi-7945 delete e2e-test-crd-publish-openapi-717-crds test-cr' Oct 23 01:37:27.391: INFO: stderr: "" Oct 23 01:37:27.391: INFO: stdout: "e2e-test-crd-publish-openapi-717-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 23 01:37:27.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 explain e2e-test-crd-publish-openapi-717-crds' Oct 23 01:37:27.775: INFO: stderr: "" Oct 23 01:37:27.775: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-717-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:31.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7945" for this suite. • [SLOW TEST:13.665 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":28,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:28.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 01:37:28.320: INFO: Waiting up to 5m0s for pod "downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1" in namespace "downward-api-77" to be "Succeeded or Failed" Oct 23 01:37:28.323: INFO: Pod "downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580651ms Oct 23 01:37:30.326: INFO: Pod "downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006787338s Oct 23 01:37:32.330: INFO: Pod "downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010461944s STEP: Saw pod success Oct 23 01:37:32.330: INFO: Pod "downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1" satisfied condition "Succeeded or Failed" Oct 23 01:37:32.334: INFO: Trying to get logs from node node1 pod downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1 container dapi-container: STEP: delete the pod Oct 23 01:37:32.387: INFO: Waiting for pod downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1 to disappear Oct 23 01:37:32.389: INFO: Pod downward-api-8ece2b64-a39d-45cb-b956-bab222acbca1 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:32.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-77" for this suite. 
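Note: the Downward API env-var spec injects pod metadata through valueFrom.fieldRef; a hypothetical equivalent:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep ^POD_"]
      env:
      - name: POD_NAME
        valueFrom: {fieldRef: {fieldPath: metadata.name}}
      - name: POD_NAMESPACE
        valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      - name: POD_IP
        valueFrom: {fieldRef: {fieldPath: status.podIP}}
  EOF
  kubectl logs downward-env-demo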
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":611,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:26.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:37:26.873: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 23 01:37:35.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8153 --namespace=crd-publish-openapi-8153 create -f -' Oct 23 01:37:35.917: INFO: stderr: "" Oct 23 01:37:35.917: INFO: stdout: "e2e-test-crd-publish-openapi-1217-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 23 01:37:35.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8153 --namespace=crd-publish-openapi-8153 delete e2e-test-crd-publish-openapi-1217-crds test-cr' Oct 23 01:37:36.089: INFO: stderr: "" Oct 23 01:37:36.089: INFO: stdout: "e2e-test-crd-publish-openapi-1217-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 23 01:37:36.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8153 --namespace=crd-publish-openapi-8153 apply -f -' Oct 23 01:37:36.453: INFO: stderr: "" Oct 23 01:37:36.453: INFO: stdout: "e2e-test-crd-publish-openapi-1217-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 23 01:37:36.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8153 --namespace=crd-publish-openapi-8153 delete e2e-test-crd-publish-openapi-1217-crds test-cr' Oct 23 01:37:36.610: INFO: stderr: "" Oct 23 01:37:36.610: INFO: stdout: "e2e-test-crd-publish-openapi-1217-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 23 01:37:36.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8153 explain e2e-test-crd-publish-openapi-1217-crds' Oct 23 01:37:36.950: INFO: stderr: "" Oct 23 01:37:36.950: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1217-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:40.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8153" for this suite. • [SLOW TEST:13.714 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":20,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":20,"skipped":380,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:18.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 23 01:37:18.969: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:20.972: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:22.973: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:24.973: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:26.973: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 01:37:26.987: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:28.990: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:30.989: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 23 01:37:31.008: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:31.011: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:33.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:33.014: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:35.011: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:35.014: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:37.013: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:37.015: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:39.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:39.015: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:41.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:41.016: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:43.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:43.014: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 01:37:45.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 01:37:45.014: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:45.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6782" for this suite. 
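Note: this is the counterpart of the preStop sketch earlier: a postStart exec hook runs right after the container starts, and the pod is not reported Running until the hook returns. A hypothetical pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: poststart-demo               # hypothetical
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo poststart > /tmp/hook.log"]
  EOF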
• [SLOW TEST:26.088 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":380,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:20.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 23 01:37:20.070: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:45.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9130" for this suite.
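Note: the multi-version spec flips served: false on one CRD version and checks that it drops out of the published OpenAPI document (which is also what feeds the kubectl explain output in the other CRD specs above). A hypothetical two-version CRD:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com             # hypothetical group and kind
  spec:
    group: example.com
    scope: Namespaced
    names: {plural: foos, singular: foo, kind: Foo}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
    - name: v2
      served: false                    # this version disappears from the OpenAPI spec
      storage: false
      schema:
        openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  EOF
  kubectl explain foos                 # documents only the served version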
• [SLOW TEST:25.392 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":16,"skipped":262,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:45.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-6788/configmap-test-b3663509-8a7b-4f11-aa33-34ab018ba4cf STEP: Creating a pod to test consume configMaps Oct 23 01:37:45.108: INFO: Waiting up to 5m0s for pod "pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11" in namespace "configmap-6788" to be "Succeeded or Failed" Oct 23 01:37:45.113: INFO: Pod "pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.742204ms Oct 23 01:37:47.116: INFO: Pod "pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007735433s Oct 23 01:37:49.121: INFO: Pod "pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012172665s STEP: Saw pod success Oct 23 01:37:49.121: INFO: Pod "pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11" satisfied condition "Succeeded or Failed" Oct 23 01:37:49.124: INFO: Trying to get logs from node node1 pod pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11 container env-test: STEP: delete the pod Oct 23 01:37:50.024: INFO: Waiting for pod pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11 to disappear Oct 23 01:37:50.025: INFO: Pod pod-configmaps-172708ea-4747-4d36-83e6-90b227ff9e11 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:37:50.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6788" for this suite. 
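Note: the ConfigMap env spec wires one key into a container environment variable; a hypothetical equivalent:

  kubectl create configmap configmap-test --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo $CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef: {name: configmap-test, key: data-1}
  EOF
  kubectl logs pod-configmaps-demo     # value-1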
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":409,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:35:29.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 in namespace container-probe-3973 Oct 23 01:35:41.264: INFO: Started pod liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 in namespace container-probe-3973 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 01:35:41.266: INFO: Initial restart count of pod liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 is 0 Oct 23 01:35:51.288: INFO: Restart count of pod container-probe-3973/liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 is now 1 (10.021971378s elapsed) Oct 23 01:36:11.326: INFO: Restart count of pod container-probe-3973/liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 is now 2 (30.05941344s elapsed) Oct 23 01:36:33.367: INFO: Restart count of pod container-probe-3973/liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 is now 3 (52.100424562s elapsed) Oct 23 01:36:55.410: INFO: Restart count of pod container-probe-3973/liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 is now 4 (1m14.143467434s elapsed) Oct 23 01:38:01.534: INFO: Restart count of pod container-probe-3973/liveness-dee26e75-4741-4cb4-8805-e175506ec9c2 is now 5 (2m20.267872319s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:01.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3973" for this suite. 
• [SLOW TEST:152.373 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":514,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:01.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Oct 23 01:38:01.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9624 create -f -' Oct 23 01:38:02.043: INFO: stderr: "" Oct 23 01:38:02.043: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 23 01:38:02.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9624 diff -f -' Oct 23 01:38:02.362: INFO: rc: 1 Oct 23 01:38:02.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9624 delete -f -' Oct 23 01:38:02.483: INFO: stderr: "" Oct 23 01:38:02.483: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:02.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9624" for this suite. 
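Note: the diff spec relies on kubectl diff returning exit code 1 when the live and declared objects differ, which is the "rc: 1" visible above. Reproduced by hand with an image bump (tags assumed):

  kubectl create deployment httpd-deployment --image=httpd:2.4.38
  kubectl create deployment httpd-deployment --image=httpd:2.4.39 --dry-run=client -o yaml | kubectl diff -f -
  echo $?                              # 1: a difference was found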
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":30,"skipped":559,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:40.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-372.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-372.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-372.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-372.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:37:46.693: INFO: DNS probes using dns-test-475d712b-a228-458e-80e0-c2c35131c6af succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-372.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-372.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-372.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-372.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:37:56.736: INFO: DNS probes using dns-test-9bb06938-b388-4c86-ba4e-d3d3da841948 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-372.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-372.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-372.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-372.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:38:02.779: INFO: DNS probes using dns-test-99a07c59-5c05-4fde-b44e-0abb66a028e0 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:02.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-372" for this suite. 
• [SLOW TEST:22.169 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":21,"skipped":311,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:02.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-67417559-09eb-43e9-986d-adf3e61b8817 STEP: Creating a pod to test consume configMaps Oct 23 01:38:02.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a" in namespace "configmap-2253" to be "Succeeded or Failed" Oct 23 01:38:02.544: INFO: Pod "pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.978429ms Oct 23 01:38:04.548: INFO: Pod "pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008225586s Oct 23 01:38:06.554: INFO: Pod "pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013486875s Oct 23 01:38:08.556: INFO: Pod "pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016472878s STEP: Saw pod success Oct 23 01:38:08.557: INFO: Pod "pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a" satisfied condition "Succeeded or Failed" Oct 23 01:38:08.559: INFO: Trying to get logs from node node2 pod pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a container agnhost-container: STEP: delete the pod Oct 23 01:38:08.573: INFO: Waiting for pod pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a to disappear Oct 23 01:38:08.575: INFO: Pod pod-configmaps-57fef859-7f98-49dc-8675-2eb95f6b096a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:08.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2253" for this suite. 
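For reference, a minimal sketch of the defaultMode mount this test exercises; all names and the 0400 mode are illustrative assumptions. defaultMode sets the permission bits of every file projected from the ConfigMap into the volume.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata: {name: demo-cm}
    data: {key: value}
    ---
    apiVersion: v1
    kind: Pod
    metadata: {name: demo-pod}
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/cm && cat /etc/cm/key"]
        volumeMounts: [{name: cm, mountPath: /etc/cm}]
      volumes:
      - name: cm
        configMap:
          name: demo-cm
          defaultMode: 0400   # projected files get mode -r--------
    EOF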
• [SLOW TEST:6.080 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":565,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:50.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3581 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3581 STEP: creating replication controller externalsvc in namespace services-3581 I1023 01:37:50.104154 29 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3581, replica count: 2 I1023 01:37:53.156093 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:37:56.156500 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 23 01:37:56.169: INFO: Creating new exec pod Oct 23 01:38:00.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3581 exec execpodtg6rl -- /bin/sh -x -c nslookup clusterip-service.services-3581.svc.cluster.local' Oct 23 01:38:00.435: INFO: stderr: "+ nslookup clusterip-service.services-3581.svc.cluster.local\n" Oct 23 01:38:00.435: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-3581.svc.cluster.local\tcanonical name = externalsvc.services-3581.svc.cluster.local.\nName:\texternalsvc.services-3581.svc.cluster.local\nAddress: 10.233.33.250\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3581, will wait for the garbage collector to delete the pods Oct 23 01:38:00.493: INFO: Deleting ReplicationController externalsvc took: 4.87976ms Oct 23 01:38:00.594: INFO: Terminating ReplicationController externalsvc pods took: 100.688347ms Oct 23 01:38:14.304: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:14.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3581" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:24.252 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":23,"skipped":426,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:08.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-9445ceb5-4f0f-4e79-ab76-4b4485980836 STEP: Creating configMap with name cm-test-opt-upd-b274a1ab-587d-4e53-ab12-96e1835ea360 STEP: Creating the pod Oct 23 01:38:08.684: INFO: The status of Pod pod-configmaps-c89af833-4946-47a1-9f56-d1180247ada1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:38:10.689: INFO: The status of Pod pod-configmaps-c89af833-4946-47a1-9f56-d1180247ada1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:38:12.688: INFO: The status of Pod pod-configmaps-c89af833-4946-47a1-9f56-d1180247ada1 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-9445ceb5-4f0f-4e79-ab76-4b4485980836 STEP: Updating configmap cm-test-opt-upd-b274a1ab-587d-4e53-ab12-96e1835ea360 STEP: Creating configMap with name cm-test-opt-create-c9cb1aa9-6f6b-4ce3-8238-9f38dc6a86ca STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:14.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-998" for this suite. 
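For reference, a minimal sketch of an optional configMap volume like the ones created above; names are illustrative assumptions. With optional: true the pod starts even if the ConfigMap is absent, and the kubelet later projects creates, updates, and deletes into the mounted files after its sync period, which is what "waiting to observe update in volume" waits for.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: optional-cm-pod}
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts: [{name: cm, mountPath: /etc/cm}]
      volumes:
      - name: cm
        configMap:
          name: maybe-missing-cm
          optional: true   # pod runs even while the ConfigMap does not exist
    EOF
    kubectl create configmap maybe-missing-cm --from-literal=data-1=value-1
    # after the kubelet sync period the key shows up inside the running pod:
    kubectl exec optional-cm-pod -- cat /etc/cm/data-1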
• [SLOW TEST:6.180 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":597,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:14.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:14.375: INFO: The status of Pod busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:38:16.379: INFO: The status of Pod busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:38:18.377: INFO: The status of Pod busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:18.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8287" for this suite. 
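For reference, the read-only root filesystem behavior verified above comes from the container securityContext; a minimal illustrative sketch:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: readonly-root}
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        # any write to the root filesystem fails with "Read-only file system"
        command: ["sh", "-c", "touch /should-fail 2>&1; true"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF
    kubectl logs readonly-root   # shows the EROFS error once the pod has run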
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":436,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:18.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:18.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8745" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":25,"skipped":509,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:02.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:18.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7178" for this suite. • [SLOW TEST:16.102 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:18.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:18.685: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 23 01:38:20.715: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:21.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9513" for this suite. 
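For reference, the failure condition checked above surfaces in the ReplicationController status and can be inspected by hand; names are illustrative, and the flow mirrors the steps logged for "condition-test".

    kubectl create quota condition-test --hard=pods=2
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata: {name: condition-test}
    spec:
      replicas: 3   # one more pod than the quota permits
      selector: {app: condition-test}
      template:
        metadata: {labels: {app: condition-test}}
        spec:
          containers:
          - {name: c, image: busybox, command: ["sleep", "3600"]}
    EOF
    # a ReplicaFailure condition appears while quota blocks the third pod:
    kubectl get rc condition-test -o jsonpath='{.status.conditions}'
    kubectl scale rc condition-test --replicas=2   # clears the condition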
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":26,"skipped":563,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:21.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-6674/configmap-test-af86e7c6-9574-4fa6-8944-ed2c676bc153 STEP: Creating a pod to test consume configMaps Oct 23 01:38:21.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a" in namespace "configmap-6674" to be "Succeeded or Failed" Oct 23 01:38:21.832: INFO: Pod "pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.964906ms Oct 23 01:38:23.836: INFO: Pod "pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00636131s Oct 23 01:38:25.840: INFO: Pod "pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01087695s STEP: Saw pod success Oct 23 01:38:25.840: INFO: Pod "pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a" satisfied condition "Succeeded or Failed" Oct 23 01:38:25.843: INFO: Trying to get logs from node node1 pod pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a container env-test: STEP: delete the pod Oct 23 01:38:25.857: INFO: Waiting for pod pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a to disappear Oct 23 01:38:25.859: INFO: Pod pod-configmaps-1a9769f5-a511-47c3-ae1e-2b1ced9e882a no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:25.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6674" for this suite. 
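For reference, a minimal sketch of consuming a ConfigMap through the environment, as the test above does; names are illustrative assumptions.

    kubectl create configmap env-demo --from-literal=DATA_1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: env-demo-pod}
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "env | grep DATA_1"]
        envFrom:
        - configMapRef: {name: env-demo}   # each key becomes an env var
    EOF
    kubectl logs env-demo-pod   # prints DATA_1=value-1 once the pod has run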
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":597,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:25.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:25.923: INFO: Waiting up to 5m0s for pod "busybox-user-65534-367ca346-589a-40af-b71a-84a87bc668e9" in namespace "security-context-test-68" to be "Succeeded or Failed" Oct 23 01:38:25.925: INFO: Pod "busybox-user-65534-367ca346-589a-40af-b71a-84a87bc668e9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.902863ms Oct 23 01:38:27.928: INFO: Pod "busybox-user-65534-367ca346-589a-40af-b71a-84a87bc668e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004703897s Oct 23 01:38:29.931: INFO: Pod "busybox-user-65534-367ca346-589a-40af-b71a-84a87bc668e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007844454s Oct 23 01:38:31.936: INFO: Pod "busybox-user-65534-367ca346-589a-40af-b71a-84a87bc668e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01280015s Oct 23 01:38:31.936: INFO: Pod "busybox-user-65534-367ca346-589a-40af-b71a-84a87bc668e9" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:31.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-68" for this suite. • [SLOW TEST:6.053 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":610,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":22,"skipped":320,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:18.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:32.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1180" for this suite. • [SLOW TEST:14.038 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":23,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:31.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-020c344c-50f8-43a3-bac3-1e836ec948c0 STEP: Creating a pod to test consume secrets Oct 23 01:38:31.987: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396" in namespace "projected-5108" to be "Succeeded or Failed" Oct 23 01:38:31.991: INFO: Pod "pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017758ms Oct 23 01:38:33.994: INFO: Pod "pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007460569s Oct 23 01:38:36.000: INFO: Pod "pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013088119s Oct 23 01:38:38.002: INFO: Pod "pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01564363s STEP: Saw pod success Oct 23 01:38:38.002: INFO: Pod "pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396" satisfied condition "Succeeded or Failed" Oct 23 01:38:38.005: INFO: Trying to get logs from node node2 pod pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396 container projected-secret-volume-test: STEP: delete the pod Oct 23 01:38:38.016: INFO: Waiting for pod pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396 to disappear Oct 23 01:38:38.018: INFO: Pod pod-projected-secrets-b7b72087-0912-4bfe-8604-d53d3812a396 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:38.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5108" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:33.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 01:38:33.093: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7441" for this suite. 
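For reference, a minimal sketch of the init-container ordering exercised above; names are illustrative. On a RestartAlways pod, each init container must run to completion, in order, before the app containers start.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: init-demo}
    spec:
      restartPolicy: Always
      initContainers:
      - {name: init-1, image: busybox, command: ["sh", "-c", "echo init-1"]}
      - {name: init-2, image: busybox, command: ["sh", "-c", "echo init-2"]}
      containers:
      - {name: app, image: busybox, command: ["sleep", "3600"]}
    EOF
    # status progresses Init:0/2 -> Init:1/2 -> PodInitializing -> Running
    kubectl get pod init-demo -w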
• [SLOW TEST:9.307 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":24,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:20.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-cbb7e93d-904b-4cb6-b514-6e36735f80c1 STEP: Creating the pod Oct 23 01:37:20.152: INFO: The status of Pod pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:22.155: INFO: The status of Pod pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:24.157: INFO: The status of Pod pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:26.156: INFO: The status of Pod pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:28.154: INFO: The status of Pod pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-cbb7e93d-904b-4cb6-b514-6e36735f80c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:42.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-445" for this suite. 
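For reference, a minimal sketch of a projected configMap volume like the one above, with illustrative names; the kubelet refreshes the projected files in place after its sync period, which is why the test can wait to observe the update in the volume.

    kubectl create configmap proj-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: proj-demo-pod}
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts: [{name: proj, mountPath: /etc/proj}]
      volumes:
      - name: proj
        projected:
          sources:
          - configMap: {name: proj-demo}
    EOF
    # update the ConfigMap, then re-read the mounted file after the sync period
    kubectl create configmap proj-demo --from-literal=data-1=value-2 \
      --dry-run=client -o yaml | kubectl apply -f -
    kubectl exec proj-demo-pod -- cat /etc/proj/data-1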
• [SLOW TEST:82.681 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":612,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:38.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:38:38.286: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 23 01:38:40.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549918, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549918, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549918, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549918, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:38:43.306: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:43.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6671" 
for this suite. STEP: Destroying namespace "webhook-6671-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.348 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":30,"skipped":612,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:18.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-5895 STEP: creating service affinity-nodeport in namespace services-5895 STEP: creating replication controller affinity-nodeport in namespace services-5895 I1023 01:36:18.628567 25 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5895, replica count: 3 I1023 01:36:21.680247 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:36:24.680712 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:36:27.681925 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:36:27.693: INFO: Creating new exec pod Oct 23 01:36:34.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Oct 23 01:36:35.023: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Oct 23 01:36:35.023: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:36:35.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.30.213 80' Oct 23 01:36:35.572: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.30.213 80\nConnection to 10.233.30.213 80 port [tcp/http] succeeded!\n" Oct 23 
01:36:35.572: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:36:35.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:36:35.962: INFO: rc: 1 Oct 23 01:36:35.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:36.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:36:37.851: INFO: rc: 1 Oct 23 01:36:37.851: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:37.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:36:38.690: INFO: rc: 1 Oct 23 01:36:38.690: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:36:38.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:36:39.381: INFO: rc: 1 Oct 23 01:36:39.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[retry loop condensed: from Oct 23 01:36:39.963 through 01:37:13.578 the same kubectl exec execpod-affinitypgknc probe, echo hostName | nc -v -t -w 2 10.10.190.207 30625, was rerun roughly once per second; every attempt logged rc: 1 with "nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused", followed by "Retrying..."]
Oct 23 01:37:13.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:14.242: INFO: rc: 1 Oct 23 01:37:14.242: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + + echonc hostName -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:15.233: INFO: rc: 1 Oct 23 01:37:15.234: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:15.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:16.364: INFO: rc: 1 Oct 23 01:37:16.364: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:16.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:17.231: INFO: rc: 1 Oct 23 01:37:17.231: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:17.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:18.221: INFO: rc: 1 Oct 23 01:37:18.222: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:18.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:19.703: INFO: rc: 1 Oct 23 01:37:19.703: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:19.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:20.424: INFO: rc: 1 Oct 23 01:37:20.424: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:20.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:21.225: INFO: rc: 1 Oct 23 01:37:21.226: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:21.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:22.403: INFO: rc: 1 Oct 23 01:37:22.404: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:23.364: INFO: rc: 1 Oct 23 01:37:23.365: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:23.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:24.789: INFO: rc: 1 Oct 23 01:37:24.789: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:24.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:25.286: INFO: rc: 1 Oct 23 01:37:25.286: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:25.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:26.207: INFO: rc: 1 Oct 23 01:37:26.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:26.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:27.180: INFO: rc: 1 Oct 23 01:37:27.180: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:27.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:28.200: INFO: rc: 1 Oct 23 01:37:28.200: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:28.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:29.193: INFO: rc: 1 Oct 23 01:37:29.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:29.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:30.204: INFO: rc: 1 Oct 23 01:37:30.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:30.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:31.217: INFO: rc: 1 Oct 23 01:37:31.217: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:31.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:32.214: INFO: rc: 1 Oct 23 01:37:32.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:32.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:33.218: INFO: rc: 1 Oct 23 01:37:33.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:33.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:34.343: INFO: rc: 1 Oct 23 01:37:34.343: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:34.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:35.647: INFO: rc: 1 Oct 23 01:37:35.647: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:35.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:36.250: INFO: rc: 1 Oct 23 01:37:36.250: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:36.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:37.353: INFO: rc: 1 Oct 23 01:37:37.353: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:37.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:38.330: INFO: rc: 1 Oct 23 01:37:38.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:38.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:39.480: INFO: rc: 1 Oct 23 01:37:39.480: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:39.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:40.209: INFO: rc: 1 Oct 23 01:37:40.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:40.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:41.229: INFO: rc: 1 Oct 23 01:37:41.229: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName+ nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:41.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:42.561: INFO: rc: 1 Oct 23 01:37:42.561: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:42.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:43.227: INFO: rc: 1 Oct 23 01:37:43.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:43.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:44.218: INFO: rc: 1 Oct 23 01:37:44.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:44.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:45.212: INFO: rc: 1 Oct 23 01:37:45.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:45.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:46.213: INFO: rc: 1 Oct 23 01:37:46.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:46.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:47.265: INFO: rc: 1 Oct 23 01:37:47.265: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:47.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:48.486: INFO: rc: 1 Oct 23 01:37:48.486: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:48.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:50.001: INFO: rc: 1 Oct 23 01:37:50.001: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:50.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:51.525: INFO: rc: 1 Oct 23 01:37:51.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:51.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:52.540: INFO: rc: 1 Oct 23 01:37:52.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:52.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:53.406: INFO: rc: 1 Oct 23 01:37:53.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:53.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:54.384: INFO: rc: 1 Oct 23 01:37:54.384: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:54.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:55.216: INFO: rc: 1 Oct 23 01:37:55.216: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:55.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:56.211: INFO: rc: 1 Oct 23 01:37:56.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:56.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:57.225: INFO: rc: 1 Oct 23 01:37:57.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:57.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:58.381: INFO: rc: 1 Oct 23 01:37:58.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:58.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:37:59.321: INFO: rc: 1 Oct 23 01:37:59.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:59.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:00.224: INFO: rc: 1 Oct 23 01:38:00.224: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:01.214: INFO: rc: 1 Oct 23 01:38:01.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:01.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:02.282: INFO: rc: 1 Oct 23 01:38:02.282: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:02.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:03.798: INFO: rc: 1 Oct 23 01:38:03.798: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:03.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:04.447: INFO: rc: 1 Oct 23 01:38:04.447: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:04.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:05.338: INFO: rc: 1 Oct 23 01:38:05.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:05.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:06.262: INFO: rc: 1 Oct 23 01:38:06.262: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:06.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:07.219: INFO: rc: 1 Oct 23 01:38:07.219: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:07.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:08.223: INFO: rc: 1 Oct 23 01:38:08.223: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:08.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:09.439: INFO: rc: 1 Oct 23 01:38:09.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:09.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:10.281: INFO: rc: 1 Oct 23 01:38:10.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:10.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:11.215: INFO: rc: 1 Oct 23 01:38:11.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:11.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:12.222: INFO: rc: 1 Oct 23 01:38:12.222: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:12.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:13.207: INFO: rc: 1 Oct 23 01:38:13.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:13.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:14.216: INFO: rc: 1 Oct 23 01:38:14.216: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:14.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:15.218: INFO: rc: 1 Oct 23 01:38:15.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:15.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:16.218: INFO: rc: 1 Oct 23 01:38:16.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:16.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:17.198: INFO: rc: 1 Oct 23 01:38:17.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:17.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625' Oct 23 01:38:18.215: INFO: rc: 1 Oct 23 01:38:18.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30625 nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:18.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:20.052: INFO: rc: 1
Oct 23 01:38:20.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:20.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:21.507: INFO: rc: 1
Oct 23 01:38:21.507: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:21.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:22.532: INFO: rc: 1
Oct 23 01:38:22.532: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:23.551: INFO: rc: 1
Oct 23 01:38:23.551: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:23.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:24.200: INFO: rc: 1
Oct 23 01:38:24.200: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:24.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:25.215: INFO: rc: 1
Oct 23 01:38:25.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:25.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:26.220: INFO: rc: 1
Oct 23 01:38:26.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:26.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:27.479: INFO: rc: 1
Oct 23 01:38:27.479: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:27.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:28.437: INFO: rc: 1
Oct 23 01:38:28.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:28.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:29.233: INFO: rc: 1
Oct 23 01:38:29.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:29.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:30.249: INFO: rc: 1
Oct 23 01:38:30.249: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:30.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:31.234: INFO: rc: 1
Oct 23 01:38:31.234: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:31.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:32.212: INFO: rc: 1
Oct 23 01:38:32.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:32.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:33.211: INFO: rc: 1
Oct 23 01:38:33.211: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:33.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:34.185: INFO: rc: 1
Oct 23 01:38:34.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:34.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:35.213: INFO: rc: 1
Oct 23 01:38:35.213: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:35.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:36.567: INFO: rc: 1
Oct 23 01:38:36.567: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:36.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625'
Oct 23 01:38:36.901: INFO: rc: 1
Oct 23 01:38:36.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5895 exec execpod-affinitypgknc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30625:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30625
nc: connect to 10.10.190.207 port 30625 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:36.902: FAIL: Unexpected error:
    <*errors.errorString | 0xc004038350>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30625 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30625 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0006b1b80, 0x779f8f8, 0xc0015eaf20, 0xc0018ce000, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000703e00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 01:38:36.903: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-5895, will wait for the garbage collector to delete the pods
Oct 23 01:38:36.969: INFO: Deleting ReplicationController affinity-nodeport took: 4.256811ms
Oct 23 01:38:37.069: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.448031ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5895".
STEP: Found 27 events.
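The probe that times out above is a shell one-liner re-run roughly once a second from the exec pod until the 2m0s budget is spent (the -x flag is why '+ echo hostName' appears in stderr). A minimal sketch of the same check done by hand, assuming the services-5895 namespace and execpod-affinitypgknc still exist (the cleanup below deletes them) and approximating the test's retry window with 120 one-second attempts:

    # Re-run the NodePort probe the test loops on: from the exec pod,
    # try to reach 10.10.190.207 on NodePort 30625 over TCP, retrying
    # until nc succeeds or the ~2-minute window is exhausted.
    for i in $(seq 1 120); do
      kubectl --kubeconfig=/root/.kube/config -n services-5895 \
        exec execpod-affinitypgknc -- /bin/sh -c \
        'echo hostName | nc -v -t -w 2 10.10.190.207 30625' && break
      sleep 1
    done

Note that every attempt fails with "Connection refused" rather than a timeout: the SYN reaches 10.10.190.207 and is actively rejected, so the node is up but nothing is answering on port 30625, which usually points at the NodePort not being programmed rather than at the backend pods.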
Oct 23 01:38:42.486: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-44w4f: { } Scheduled: Successfully assigned services-5895/affinity-nodeport-44w4f to node1
Oct 23 01:38:42.486: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-6nj64: { } Scheduled: Successfully assigned services-5895/affinity-nodeport-6nj64 to node1
Oct 23 01:38:42.486: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-chcs4: { } Scheduled: Successfully assigned services-5895/affinity-nodeport-chcs4 to node1
Oct 23 01:38:42.486: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitypgknc: { } Scheduled: Successfully assigned services-5895/execpod-affinitypgknc to node2
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:18 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-chcs4
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:18 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-6nj64
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:18 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-44w4f
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:22 +0000 UTC - event for affinity-nodeport-44w4f: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:22 +0000 UTC - event for affinity-nodeport-6nj64: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:22 +0000 UTC - event for affinity-nodeport-chcs4: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 308.619656ms
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:22 +0000 UTC - event for affinity-nodeport-chcs4: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-6nj64: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 854.609149ms
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-chcs4: {kubelet node1} Created: Created container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-chcs4: {kubelet node1} Started: Started container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-44w4f: {kubelet node1} Created: Created container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-44w4f: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.203717572s
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-6nj64: {kubelet node1} Started: Started container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-6nj64: {kubelet node1} Created: Created container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:25 +0000 UTC - event for affinity-nodeport-44w4f: {kubelet node1} Started: Started container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:30 +0000 UTC - event for execpod-affinitypgknc: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:31 +0000 UTC - event for execpod-affinitypgknc: {kubelet node2} Created: Created container agnhost-container
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:31 +0000 UTC - event for execpod-affinitypgknc: {kubelet node2} Started: Started container agnhost-container
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:36:31 +0000 UTC - event for execpod-affinitypgknc: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 463.048037ms
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:38:36 +0000 UTC - event for affinity-nodeport-44w4f: {kubelet node1} Killing: Stopping container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:38:36 +0000 UTC - event for affinity-nodeport-6nj64: {kubelet node1} Killing: Stopping container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:38:36 +0000 UTC - event for affinity-nodeport-chcs4: {kubelet node1} Killing: Stopping container affinity-nodeport
Oct 23 01:38:42.486: INFO: At 2021-10-23 01:38:36 +0000 UTC - event for execpod-affinitypgknc: {kubelet node2} Killing: Stopping container agnhost-container
Oct 23 01:38:42.488: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 01:38:42.488: INFO: 
Oct 23 01:38:42.493: INFO: Logging node info for node master1
Oct 23 01:38:42.496: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 100829 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:42.496: INFO: Logging kubelet events for node master1 Oct 23 01:38:42.499: INFO: Logging pods the kubelet 
thinks is on node master1 Oct 23 01:38:42.526: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 01:38:42.526: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:38:42.526: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:38:42.526: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:38:42.526: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:42.526: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Container coredns ready: true, restart count 2 Oct 23 01:38:42.526: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.526: INFO: Container docker-registry ready: true, restart count 0 Oct 23 01:38:42.526: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:42.526: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.526: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:42.526: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:42.526: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:38:42.526: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.526: INFO: Container kube-scheduler ready: true, restart count 0 W1023 01:38:42.540692 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
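The events collected from services-5895 above show all three affinity-nodeport backends Running on node1 by 01:36:25, well before the probe window opened, which makes missing endpoints unlikely and points back at the NodePort path. A minimal sketch of pulling the same evidence with plain kubectl, assuming it is run before the namespace is torn down (that the Service shares the affinity-nodeport name with its ReplicationController is an assumption based on this test's naming):

    # Events in time order, then the endpoints and service actually behind
    # the NodePort the probe was targeting.
    kubectl -n services-5895 get events --sort-by=.lastTimestamp
    kubectl -n services-5895 get endpoints affinity-nodeport -o wide
    kubectl -n services-5895 get svc affinity-nodeport -o wide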
Oct 23 01:38:42.608: INFO: Latency metrics for node master1 Oct 23 01:38:42.608: INFO: Logging node info for node master2 Oct 23 01:38:42.613: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 100654 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:33 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:33 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:33 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:33 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:42.613: INFO: Logging kubelet events for node master2 Oct 23 01:38:42.618: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 01:38:42.633: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:38:42.633: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:38:42.633: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:38:42.633: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:38:42.633: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:38:42.633: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:42.633: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:38:42.633: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.633: INFO: Container autoscaler ready: true, restart count 1 Oct 23 01:38:42.633: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.633: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:42.633: INFO: Container node-exporter ready: true, restart count 0 W1023 01:38:42.653630 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
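Given that the backend pods sit on node1 and the refused connections target 10.10.190.207 (node1's InternalIP per the node info below), the remaining suspect is kube-proxy's NodePort programming on that node. A minimal sketch of the checks, assuming shell access to node1 and the default iptables proxy mode for the v1.21.1 kube-proxy logged here:

    # A healthy NodePort service shows up in the nat table's
    # KUBE-NODEPORTS chain; no match for 30625 would explain the
    # "Connection refused" seen by the probe.
    sudo iptables-save -t nat | grep 30625
    # kube-proxy's own logs would show any rule-sync errors.
    kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50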
Oct 23 01:38:42.723: INFO: Latency metrics for node master2 Oct 23 01:38:42.723: INFO: Logging node info for node master3 Oct 23 01:38:42.726: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 100819 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:42.726: INFO: Logging kubelet events for node master3 Oct 23 01:38:42.729: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 01:38:42.743: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:38:42.743: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:38:42.743: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:38:42.743: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:38:42.743: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:38:42.743: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 01:38:42.743: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.743: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:42.743: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:42.743: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:38:42.743: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.743: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:42.744: 
INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.744: INFO: Container coredns ready: true, restart count 2 W1023 01:38:42.759289 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:38:42.842: INFO: Latency metrics for node master3 Oct 23 01:38:42.842: INFO: Logging node info for node node1 Oct 23 01:38:42.844: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 100853 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:42.846: INFO: Logging kubelet events for node node1 Oct 23 01:38:42.848: INFO: Logging pods the kubelet thinks are on node node1 Oct 23 01:38:42.868: INFO: ss-1 started at 2021-10-23 01:38:16 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container webserver ready: false, restart count 0 Oct 23 01:38:42.868: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:38:42.868: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:42.868: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:38:42.868: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:42.868: INFO: Container discover ready: false, restart count 0 Oct 23 01:38:42.868: INFO: Container init ready: false, restart count 0 Oct 23 01:38:42.868: INFO: Container install ready: false, restart count 0 Oct 23 01:38:42.868: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.868: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:38:42.868: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:38:42.868: INFO: pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3 started at (0+0 container statuses recorded) Oct 23 01:38:42.868: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:38:42.868: INFO: simpletest.rc-99m8l started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:42.868: INFO: pod-configmaps-71668d76-7d7c-4bb8-b70e-b9a0891edd52 started at 2021-10-23 01:37:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:38:42.868: INFO: ss-0 started at 2021-10-23 01:37:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container webserver ready: false, restart count 0 Oct 23 01:38:42.868: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:38:42.868: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:38:42.868: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:38:42.868: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.868: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:42.868: INFO: Container node-exporter
ready: true, restart count 0 Oct 23 01:38:42.868: INFO: affinity-nodeport-timeout-r4j9h started at 2021-10-23 01:36:21 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container affinity-nodeport-timeout ready: false, restart count 0 Oct 23 01:38:42.868: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:38:42.868: INFO: simpletest.rc-d49qr started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:42.868: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.868: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:38:42.868: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 01:38:42.869: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:38:42.869: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:38:42.869: INFO: Container grafana ready: true, restart count 0 Oct 23 01:38:42.869: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:38:42.869: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:42.869: INFO: Container collectd ready: true, restart count 0 Oct 23 01:38:42.869: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:38:42.869: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:38:42.869: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:42.869: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:42.869: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:38:42.869: INFO: simpletest.rc-95b8r started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.869: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:42.869: INFO: simpletest.rc-292vs started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:42.869: INFO: Container nginx ready: true, restart count 0 W1023 01:38:42.884779 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
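For anyone trying to regenerate this kind of node diagnostic outside the e2e framework, roughly equivalent information is available through plain kubectl. This is a minimal sketch, not the framework's own code path; the node name and kubeconfig path are taken from the log above:

# Node object: labels, annotations, capacity/allocatable, conditions, cached images
kubectl --kubeconfig=/root/.kube/config get node node1 -o yaml

# Events recorded against the node (the "Logging kubelet events" step)
kubectl --kubeconfig=/root/.kube/config get events --all-namespaces \
  --field-selector involvedObject.kind=Node,involvedObject.name=node1

# Pods scheduled to the node (the "Logging pods the kubelet thinks are on node" step)
kubectl --kubeconfig=/root/.kube/config get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=node1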
Oct 23 01:38:43.133: INFO: Latency metrics for node node1 Oct 23 01:38:43.133: INFO: Logging node info for node node2 Oct 23 01:38:43.138: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 100741 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:43.138: INFO: Logging kubelet events for node node2 Oct 23 01:38:43.141: INFO: Logging pods the kubelet thinks are on node node2 Oct 23 01:38:43.163: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:38:43.163: INFO: busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 started at 2021-10-23 01:38:14 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 ready: true, restart count 0 Oct 23 01:38:43.163: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:38:43.163: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:38:43.163: INFO: affinity-nodeport-timeout-db4ph started at 2021-10-23 01:36:21 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container affinity-nodeport-timeout ready: false, restart count 0 Oct 23 01:38:43.163: INFO: simpletest.rc-nd8mp started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:43.163: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:38:43.163: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:43.163: INFO: Container discover ready: false, restart count 0 Oct 23 01:38:43.163: INFO: Container init ready: false, restart count 0 Oct 23 01:38:43.163: INFO: Container install ready: false, restart count 0 Oct 23 01:38:43.163: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1
container statuses recorded) Oct 23 01:38:43.163: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:38:43.163: INFO: affinity-nodeport-timeout-brs4b started at 2021-10-23 01:36:21 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container affinity-nodeport-timeout ready: false, restart count 0 Oct 23 01:38:43.163: INFO: simpletest.rc-ghmq9 started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:43.163: INFO: simpletest.rc-9vk5x started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:43.163: INFO: simpletest.rc-bmrm9 started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:43.163: INFO: sample-webhook-deployment-78988fc6cd-w7wvc started at 2021-10-23 01:38:38 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container sample-webhook ready: true, restart count 0 Oct 23 01:38:43.163: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:43.163: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:38:43.163: INFO: pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d started at 2021-10-23 01:37:20 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:38:43.163: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:38:43.163: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:43.163: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:43.163: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:43.163: INFO: simpletest.rc-gncll started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:43.163: INFO: ss-2 started at 2021-10-23 01:38:19 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container webserver ready: false, restart count 0 Oct 23 01:38:43.163: INFO: pod-init-54434288-d482-4b5d-8c60-3bd2f3580d47 started at 2021-10-23 01:38:33 +0000 UTC (2+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Init container init1 ready: true, restart count 0 Oct 23 01:38:43.163: INFO: Init container init2 ready: true, restart count 0 Oct 23 01:38:43.163: INFO: Container run1 ready: true, restart count 0 Oct 23 01:38:43.163: INFO: simpletest.rc-mwbkf started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:43.163: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:38:43.163: INFO: Container kube-flannel ready: true, restart count 2 
Oct 23 01:38:43.163: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:43.163: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:38:43.163: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:43.163: INFO: Container collectd ready: true, restart count 0 Oct 23 01:38:43.163: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:38:43.163: INFO: Container rbac-proxy ready: true, restart count 0 W1023 01:38:43.176986 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:38:43.940: INFO: Latency metrics for node node2 Oct 23 01:38:43.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5895" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [145.353 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:36.902: Unexpected error: <*errors.errorString | 0xc004038350>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30625 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30625 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":623,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:36:16.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4116 Oct 23 01:36:16.967: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:18.970: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:36:20.970: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 23 01:36:20.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 
http://localhost:10249/proxyMode' Oct 23 01:36:21.241: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Oct 23 01:36:21.241: INFO: stdout: "iptables" Oct 23 01:36:21.241: INFO: proxyMode: iptables Oct 23 01:36:21.250: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 23 01:36:21.252: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4116 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4116 I1023 01:36:21.265734 22 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4116, replica count: 3 I1023 01:36:24.317061 22 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 01:36:27.317466 22 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 01:36:27.326: INFO: Creating new exec pod Oct 23 01:36:32.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Oct 23 01:36:32.623: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Oct 23 01:36:32.623: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:36:32.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.20.13 80' Oct 23 01:36:32.878: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.20.13 80\nConnection to 10.233.20.13 80 port [tcp/http] succeeded!\n" Oct 23 01:36:32.878: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 01:36:32.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:36:33.151: INFO: rc: 1 Oct 23 01:36:33.151: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
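The probe sequence above (proxy-mode detection, then the NodePort nc check) can be replayed by hand against the same cluster. A minimal sketch; the namespace, exec pod, node IP, and port are copied from this log, and the nc invocation is the same one the framework runs:

# Kube-proxy mode, as served on kube-proxy's metrics port; run inside a pod on the
# node, as the detector pod does (expected stdout here: iptables)
curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode

# The failing NodePort reachability check, re-run from the exec pod
kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 \
  exec execpod-affinity779cd -- /bin/sh -x -c \
  'echo hostName | nc -v -t -w 2 10.10.190.207 31458'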
Oct 23 01:36:34 - 01:37:09: INFO: the same nc probe against 10.10.190.207:31458 was rerun from execpod-affinity779cd roughly once per second; every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused", each followed by "Retrying...".
Oct 23 01:37:10.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:10.433: INFO: rc: 1 Oct 23 01:37:10.433: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:11.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:11.391: INFO: rc: 1 Oct 23 01:37:11.391: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:12.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:12.544: INFO: rc: 1 Oct 23 01:37:12.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:13.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:13.448: INFO: rc: 1 Oct 23 01:37:13.448: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:14.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:14.502: INFO: rc: 1 Oct 23 01:37:14.502: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:15.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:15.813: INFO: rc: 1 Oct 23 01:37:15.813: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:16.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:16.442: INFO: rc: 1 Oct 23 01:37:16.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:17.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:17.400: INFO: rc: 1 Oct 23 01:37:17.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:18.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:18.380: INFO: rc: 1 Oct 23 01:37:18.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:19.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:19.791: INFO: rc: 1 Oct 23 01:37:19.791: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:20.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:20.758: INFO: rc: 1 Oct 23 01:37:20.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:21.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:21.708: INFO: rc: 1 Oct 23 01:37:21.708: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:22.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:22.395: INFO: rc: 1 Oct 23 01:37:22.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:23.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:23.389: INFO: rc: 1 Oct 23 01:37:23.389: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:24.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:24.400: INFO: rc: 1 Oct 23 01:37:24.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:25.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:25.414: INFO: rc: 1 Oct 23 01:37:25.414: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:26.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:26.389: INFO: rc: 1 Oct 23 01:37:26.389: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:27.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:27.395: INFO: rc: 1 Oct 23 01:37:27.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:28.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:28.497: INFO: rc: 1 Oct 23 01:37:28.497: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:29.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:29.388: INFO: rc: 1 Oct 23 01:37:29.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:30.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:30.420: INFO: rc: 1 Oct 23 01:37:30.421: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:31.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:31.408: INFO: rc: 1 Oct 23 01:37:31.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:32.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:32.444: INFO: rc: 1 Oct 23 01:37:32.444: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:33.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:33.455: INFO: rc: 1 Oct 23 01:37:33.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:34.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:34.530: INFO: rc: 1 Oct 23 01:37:34.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:35.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:35.499: INFO: rc: 1 Oct 23 01:37:35.499: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:36.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:36.555: INFO: rc: 1 Oct 23 01:37:36.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:37.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:37.446: INFO: rc: 1 Oct 23 01:37:37.446: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:38.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:38.387: INFO: rc: 1 Oct 23 01:37:38.387: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:39.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:39.376: INFO: rc: 1 Oct 23 01:37:39.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:40.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:40.408: INFO: rc: 1 Oct 23 01:37:40.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:41.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:41.405: INFO: rc: 1 Oct 23 01:37:41.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:42.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:42.396: INFO: rc: 1 Oct 23 01:37:42.397: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31458 + echo hostName nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:43.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:43.394: INFO: rc: 1 Oct 23 01:37:43.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:44.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:44.375: INFO: rc: 1 Oct 23 01:37:44.375: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:45.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:45.371: INFO: rc: 1 Oct 23 01:37:45.371: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:46.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:46.630: INFO: rc: 1 Oct 23 01:37:46.630: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:47.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:47.535: INFO: rc: 1 Oct 23 01:37:47.535: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:48.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:48.401: INFO: rc: 1 Oct 23 01:37:48.402: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:49.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:50.300: INFO: rc: 1 Oct 23 01:37:50.300: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:51.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:51.401: INFO: rc: 1 Oct 23 01:37:51.402: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:52.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:52.534: INFO: rc: 1 Oct 23 01:37:52.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:53.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:53.408: INFO: rc: 1 Oct 23 01:37:53.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:54.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:54.411: INFO: rc: 1 Oct 23 01:37:54.411: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:55.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:55.417: INFO: rc: 1 Oct 23 01:37:55.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:56.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:56.410: INFO: rc: 1 Oct 23 01:37:56.410: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:57.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:57.411: INFO: rc: 1 Oct 23 01:37:57.411: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:37:58.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:58.396: INFO: rc: 1 Oct 23 01:37:58.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:37:59.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:37:59.398: INFO: rc: 1 Oct 23 01:37:59.399: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:00.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:01.544: INFO: rc: 1 Oct 23 01:38:01.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:02.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:03.167: INFO: rc: 1 Oct 23 01:38:03.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:04.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:07.675: INFO: rc: 1 Oct 23 01:38:07.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:08.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:08.421: INFO: rc: 1 Oct 23 01:38:08.422: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:09.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:09.400: INFO: rc: 1 Oct 23 01:38:09.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:10.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:10.418: INFO: rc: 1 Oct 23 01:38:10.419: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:11.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:11.390: INFO: rc: 1 Oct 23 01:38:11.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:12.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:12.398: INFO: rc: 1 Oct 23 01:38:12.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:13.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:13.390: INFO: rc: 1 Oct 23 01:38:13.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:14.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:14.383: INFO: rc: 1 Oct 23 01:38:14.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:15.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:15.386: INFO: rc: 1 Oct 23 01:38:15.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:16.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:16.424: INFO: rc: 1 Oct 23 01:38:16.424: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:17.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:17.427: INFO: rc: 1 Oct 23 01:38:17.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:18.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:18.449: INFO: rc: 1 Oct 23 01:38:18.449: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:19.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:19.509: INFO: rc: 1 Oct 23 01:38:19.509: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:20.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:20.687: INFO: rc: 1 Oct 23 01:38:20.687: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:21.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:21.821: INFO: rc: 1 Oct 23 01:38:21.821: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:22.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:22.471: INFO: rc: 1 Oct 23 01:38:22.471: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:23.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:23.480: INFO: rc: 1 Oct 23 01:38:23.481: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31458 + echo hostName nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:24.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:24.433: INFO: rc: 1 Oct 23 01:38:24.433: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:25.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:25.812: INFO: rc: 1 Oct 23 01:38:25.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:26.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:26.391: INFO: rc: 1 Oct 23 01:38:26.391: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 01:38:27.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458' Oct 23 01:38:27.439: INFO: rc: 1 Oct 23 01:38:27.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31458 nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 01:38:28.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:28.530: INFO: rc: 1
Oct 23 01:38:28.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:29.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:29.398: INFO: rc: 1
Oct 23 01:38:29.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:30.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:30.422: INFO: rc: 1
Oct 23 01:38:30.422: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:31.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:31.400: INFO: rc: 1
Oct 23 01:38:31.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:32.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:32.406: INFO: rc: 1
Oct 23 01:38:32.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:33.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:33.451: INFO: rc: 1
Oct 23 01:38:33.451: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 01:38:33.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458'
Oct 23 01:38:33.688: INFO: rc: 1
Oct 23 01:38:33.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4116 exec execpod-affinity779cd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31458:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31458
nc: connect to 10.10.190.207 port 31458 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
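The harness re-runs the same probe roughly once per second until its 2m0s budget is exhausted, then fails (below). The same retry pattern in plain stdlib Go — a sketch of the pattern, not the e2e framework's implementation:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "10.10.190.207:31458" // node IP and NodePort from the log above
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // The 2s dial timeout mirrors `nc -w 2`.
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("reachable")
                return
            }
            // "connection refused" means the port answered with a RST:
            // the node is up but nothing is listening or forwarding there.
            fmt.Println("retrying:", err)
            time.Sleep(time.Second)
        }
        fmt.Println("service is not reachable within 2m0s")
    }

Distinguishing a refused connection from a dial timeout is worth doing when reading these logs: a timeout points at dropped packets (firewall, routing), while the consistent refusals here point at the NodePort itself.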
Oct 23 01:38:33.689: FAIL: Unexpected error:
    <*errors.errorString | 0xc00827c580>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31458 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31458 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0015d5340, 0x779f8f8, 0xc0048591e0, 0xc001898280)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001456180)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001456180)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001456180, 0x70e7b58)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 01:38:33.690: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4116, will wait for the garbage collector to delete the pods
Oct 23 01:38:33.767: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.45151ms
Oct 23 01:38:33.867: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.530571ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4116".
STEP: Found 33 events.
Oct 23 01:38:44.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-brs4b: { } Scheduled: Successfully assigned services-4116/affinity-nodeport-timeout-brs4b to node2
Oct 23 01:38:44.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-db4ph: { } Scheduled: Successfully assigned services-4116/affinity-nodeport-timeout-db4ph to node2
Oct 23 01:38:44.285: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-r4j9h: { } Scheduled: Successfully assigned services-4116/affinity-nodeport-timeout-r4j9h to node1
Oct 23 01:38:44.285: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity779cd: { } Scheduled: Successfully assigned services-4116/execpod-affinity779cd to node1
Oct 23 01:38:44.285: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-4116/kube-proxy-mode-detector to node2
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:17 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 311.293276ms
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:17 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:18 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:18 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:21 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-brs4b
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:21 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-r4j9h
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:21 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-db4ph
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:21 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-timeout-brs4b: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-timeout-brs4b: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 316.701768ms
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-timeout-brs4b: {kubelet node2} Created: Created container affinity-nodeport-timeout
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-timeout-db4ph: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:23 +0000 UTC - event for affinity-nodeport-timeout-r4j9h: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-timeout-brs4b: {kubelet node2} Started: Started container affinity-nodeport-timeout
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-timeout-db4ph: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 613.1507ms
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-timeout-db4ph: {kubelet node2} Started: Started container affinity-nodeport-timeout
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-timeout-db4ph: {kubelet node2} Created: Created container affinity-nodeport-timeout
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-timeout-r4j9h: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 768.865085ms
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:24 +0000 UTC - event for affinity-nodeport-timeout-r4j9h: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:25 +0000 UTC - event for affinity-nodeport-timeout-r4j9h: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:29 +0000 UTC - event for execpod-affinity779cd: {kubelet node1} Started: Started container agnhost-container
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:29 +0000 UTC - event for execpod-affinity779cd: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 315.734884ms
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:29 +0000 UTC - event for execpod-affinity779cd: {kubelet node1} Created: Created container agnhost-container
Oct 23 01:38:44.285: INFO: At 2021-10-23 01:36:29 +0000 UTC - event for execpod-affinity779cd: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 01:38:44.285:
INFO: At 2021-10-23 01:38:33 +0000 UTC - event for affinity-nodeport-timeout-brs4b: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Oct 23 01:38:44.285: INFO: At 2021-10-23 01:38:33 +0000 UTC - event for affinity-nodeport-timeout-db4ph: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Oct 23 01:38:44.285: INFO: At 2021-10-23 01:38:33 +0000 UTC - event for affinity-nodeport-timeout-r4j9h: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Oct 23 01:38:44.285: INFO: At 2021-10-23 01:38:33 +0000 UTC - event for execpod-affinity779cd: {kubelet node1} Killing: Stopping container agnhost-container Oct 23 01:38:44.287: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:44.287: INFO: Oct 23 01:38:44.291: INFO: Logging node info for node master1 Oct 23 01:38:44.294: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 100829 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: 
{{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:39 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:44.295: INFO: Logging kubelet events for node master1 Oct 23 01:38:44.297: INFO: Logging pods the kubelet thinks is on node master1 Oct 23 01:38:44.318: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:38:44.318: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 01:38:44.318: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:38:44.318: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Init container install-cni ready: true, restart count 1 Oct 
23 01:38:44.318: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:38:44.318: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:44.318: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Container coredns ready: true, restart count 2 Oct 23 01:38:44.318: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.318: INFO: Container docker-registry ready: true, restart count 0 Oct 23 01:38:44.318: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.318: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.318: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.318: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:44.318: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.318: INFO: Container kube-scheduler ready: true, restart count 0 W1023 01:38:44.335098 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:38:44.405: INFO: Latency metrics for node master1 Oct 23 01:38:44.405: INFO: Logging node info for node master2 Oct 23 01:38:44.408: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 100948 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:43 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:43 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:43 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:43 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:44.408: INFO: Logging kubelet events for node master2 Oct 23 01:38:44.411: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 01:38:44.419: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 01:38:44.419: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Container autoscaler ready: true, restart count 1 Oct 23 01:38:44.419: INFO: node-exporter-vljkh started at 
2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.419: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.419: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:44.419: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:38:44.419: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:38:44.419: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:38:44.419: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:38:44.419: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:38:44.419: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.419: INFO: Container kube-multus ready: true, restart count 1 W1023 01:38:44.434466 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:38:44.493: INFO: Latency metrics for node master2 Oct 23 01:38:44.493: INFO: Logging node info for node master3 Oct 23 01:38:44.497: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 100819 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:38 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:44.498: INFO: Logging kubelet events for node master3 Oct 23 01:38:44.501: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 01:38:44.510: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:44.510: INFO: coredns-8474476ff8-7wlfp started at 
2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container coredns ready: true, restart count 2 Oct 23 01:38:44.510: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 01:38:44.510: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 01:38:44.510: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 01:38:44.510: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:38:44.510: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 01:38:44.510: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 01:38:44.510: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.510: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.510: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:44.510: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.510: INFO: Container kube-apiserver ready: true, restart count 0 W1023 01:38:44.526666 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
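The node dumps in this AfterEach are the framework's standard failure diagnostics. The condition summary they bury in each Node object (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready) can be pulled in a few lines of client-go; a minimal sketch, assuming the same kubeconfig path as this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Print one line per condition per node, mirroring the NodeCondition
        // entries embedded in the dumps above.
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
            }
        }
    }

In this run all masters and nodes report Ready=True, so the reachability failure is not explained by node health.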
Oct 23 01:38:44.592: INFO: Latency metrics for node master3 Oct 23 01:38:44.592: INFO: Logging node info for node node1 Oct 23 01:38:44.595: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 100853 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:19:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:41 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:44.596: INFO: Logging kubelet events for node node1 Oct 23 01:38:44.598: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 01:38:44.615: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:38:44.615: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:38:44.615: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 01:38:44.615: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container grafana ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:38:44.615: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:44.615: INFO: Container collectd ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.615: INFO: simpletest.rc-d49qr started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.615: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:38:44.615: INFO: simpletest.rc-95b8r started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.615: INFO: simpletest.rc-292vs started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.615: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:38:44.615: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:44.615: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:38:44.615: INFO: ss-1 started at 2021-10-23 01:38:16 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container webserver ready: false, restart count 0 Oct 23 01:38:44.615: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:44.615: INFO: Container discover ready: false, restart count 
0 Oct 23 01:38:44.615: INFO: Container init ready: false, restart count 0 Oct 23 01:38:44.615: INFO: Container install ready: false, restart count 0 Oct 23 01:38:44.615: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:38:44.615: INFO: pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3 started at 2021-10-23 01:38:42 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container agnhost-container ready: false, restart count 0 Oct 23 01:38:44.615: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:38:44.615: INFO: simpletest.rc-99m8l started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.615: INFO: ss-0 started at 2021-10-23 01:37:45 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container webserver ready: false, restart count 0 Oct 23 01:38:44.615: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Init container install-cni ready: true, restart count 2 Oct 23 01:38:44.615: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:38:44.615: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:38:44.615: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.615: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.615: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:44.615: INFO: pod-configmaps-71668d76-7d7c-4bb8-b70e-b9a0891edd52 started at 2021-10-23 01:37:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.615: INFO: Container agnhost-container ready: true, restart count 0 W1023 01:38:44.631325 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
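------------------------------
The Capacity and Allocatable lists in the node dumps above print each resource as the internal fields of a resource.Quantity, e.g. hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI}: a scaled integer value, a cached string form, and a format (DecimalSI or BinarySI). A minimal sketch of how such quantities round-trip through k8s.io/apimachinery, using values taken from the node1 dump; this is illustrative only, not the e2e framework's own code:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// "20Gi" carries a binary suffix, so it renders as BinarySI;
	// a bare "3" renders as DecimalSI. Both appear in the dump above.
	hugepages := resource.MustParse("20Gi")
	cores := resource.MustParse("3")

	fmt.Println(hugepages.Value()) // 21474836480, the raw byte count from the dump
	fmt.Println(cores.Value())     // 3

	// Constructing a quantity directly, e.g. for an extended resource
	// such as intel.com/intel_sriov_netdevice: 4.
	sriov := resource.NewQuantity(4, resource.DecimalSI)
	fmt.Println(sriov.String()) // "4"
}
------------------------------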
Oct 23 01:38:44.905: INFO: Latency metrics for node node1 Oct 23 01:38:44.905: INFO: Logging node info for node node2 Oct 23 01:38:44.910: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 100741 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 01:20:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 01:28:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 01:38:37 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 01:38:44.910: INFO: Logging kubelet events for node node2 Oct 23 01:38:44.913: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 01:38:44.927: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.927: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:38:44.927: INFO: busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 started at 2021-10-23 01:38:14 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.927: INFO: Container busybox-readonly-fs281bbf9e-74b9-41e5-8d2a-28454ace7ba8 ready: true, restart count 0 Oct 23 01:38:44.927: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.927: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:38:44.927: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:38:44.927: INFO: simpletest.rc-nd8mp started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.927: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.927: INFO: simpletest.rc-ghmq9 started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.927: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.928: INFO: simpletest.rc-9vk5x started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.928: INFO: simpletest.rc-bmrm9 started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.928: INFO: busybox-579e25c2-6f03-4f69-acfa-d1ee66ffb6bd started at 2021-10-23 01:38:43 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container busybox ready: false, restart count 0 Oct 23 01:38:44.928: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 
+0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:38:44.928: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:44.928: INFO: Container discover ready: false, restart count 0 Oct 23 01:38:44.928: INFO: Container init ready: false, restart count 0 Oct 23 01:38:44.928: INFO: Container install ready: false, restart count 0 Oct 23 01:38:44.928: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:38:44.928: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:38:44.928: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:38:44.928: INFO: pod-projected-configmaps-72d24e9a-bc17-4a3f-a2cc-cc5d3da8221d started at 2021-10-23 01:37:20 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 01:38:44.928: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:38:44.928: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 01:38:44.928: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:38:44.928: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:38:44.928: INFO: simpletest.rc-gncll started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.928: INFO: ss-2 started at 2021-10-23 01:38:19 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container webserver ready: false, restart count 0 Oct 23 01:38:44.928: INFO: pod-init-54434288-d482-4b5d-8c60-3bd2f3580d47 started at 2021-10-23 01:38:33 +0000 UTC (2+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Init container init1 ready: true, restart count 0 Oct 23 01:38:44.928: INFO: Init container init2 ready: true, restart count 0 Oct 23 01:38:44.928: INFO: Container run1 ready: true, restart count 0 Oct 23 01:38:44.928: INFO: simpletest.rc-mwbkf started at 2021-10-23 01:37:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container nginx ready: true, restart count 0 Oct 23 01:38:44.928: INFO: test-rollover-controller-b4shd started at 2021-10-23 01:38:44 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container httpd ready: false, restart count 0 Oct 23 01:38:44.928: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Init container install-cni ready: true, restart count 1 Oct 23 01:38:44.928: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:38:44.928: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 01:38:44.928: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:38:44.928: 
INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 01:38:44.928: INFO: Container collectd ready: true, restart count 0 Oct 23 01:38:44.928: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:38:44.928: INFO: Container rbac-proxy ready: true, restart count 0 W1023 01:38:44.937486 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:38:46.520: INFO: Latency metrics for node node2 Oct 23 01:38:46.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4116" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [149.599 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:33.689: Unexpected error: <*errors.errorString | 0xc00827c580>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31458 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31458 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":288,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":494,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:42.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-c9abfc21-829b-4c94-b848-45bf74bf4679 STEP: Creating a pod to test consume configMaps Oct 23 01:38:42.834: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3" in namespace "projected-2695" to be "Succeeded or Failed" Oct 23 01:38:42.836: INFO: Pod "pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295552ms Oct 23 01:38:44.840: INFO: Pod "pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005835077s Oct 23 01:38:46.844: INFO: Pod "pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010583203s STEP: Saw pod success Oct 23 01:38:46.844: INFO: Pod "pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3" satisfied condition "Succeeded or Failed" Oct 23 01:38:46.847: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3 container agnhost-container: STEP: delete the pod Oct 23 01:38:46.858: INFO: Waiting for pod pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3 to disappear Oct 23 01:38:46.860: INFO: Pod pod-projected-configmaps-2a59c82c-f11a-40e6-a8b8-fd3987c772a3 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:46.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2695" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":494,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:46.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:46.616: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e" in namespace "security-context-test-9453" to be "Succeeded or Failed" Oct 23 01:38:46.618: INFO: Pod "alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37289ms Oct 23 01:38:48.625: INFO: Pod "alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009057649s Oct 23 01:38:50.628: INFO: Pod "alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012362906s Oct 23 01:38:52.632: INFO: Pod "alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01579605s Oct 23 01:38:52.632: INFO: Pod "alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:52.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9453" for this suite. 
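------------------------------
The pod this spec waits on, alpine-nnp-false-3b5360fc-95a2-430a-98c7-75013493438e, only reaches Succeeded if privilege escalation is actually blocked inside the container. A sketch of the relevant pod shape using k8s.io/api types; the pod name is a stand-in (the real test generates a UUID-suffixed name) and this is not the framework's own pod factory:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	allow := false
	pod := corev1.Pod{
		// Hypothetical name; the test appends a generated UUID.
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-false",
				Image: "alpine:3.12", // an image cached on the nodes above
				SecurityContext: &corev1.SecurityContext{
					// The field under test: with this set to false the kernel's
					// no_new_privs flag is applied, so setuid binaries cannot
					// raise the process's effective privileges.
					AllowPrivilegeEscalation: &allow,
				},
			}},
		},
	}
	fmt.Println(pod.Name, *pod.Spec.Containers[0].SecurityContext.AllowPrivilegeEscalation)
}
------------------------------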
• [SLOW TEST:6.063 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":311,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:46.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 01:38:47.146: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 01:38:49.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:38:51.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549927, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 01:38:54.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:54.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1058" for this suite. STEP: Destroying namespace "webhook-1058-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.367 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":31,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:23.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-9f3007a1-1860-405d-8b49-45eeaf90e186 STEP: Creating the pod Oct 23 01:37:23.603: INFO: The status of Pod pod-configmaps-71668d76-7d7c-4bb8-b70e-b9a0891edd52 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:25.607: INFO: The status of Pod pod-configmaps-71668d76-7d7c-4bb8-b70e-b9a0891edd52 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:37:27.606: INFO: The status of Pod pod-configmaps-71668d76-7d7c-4bb8-b70e-b9a0891edd52 is Running (Ready = true) STEP: Updating configmap 
configmap-test-upd-9f3007a1-1860-405d-8b49-45eeaf90e186 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:57.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8679" for this suite. • [SLOW TEST:93.641 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":342,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:54.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 23 01:38:54.328: INFO: Waiting up to 5m0s for pod "pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10" in namespace "emptydir-6250" to be "Succeeded or Failed" Oct 23 01:38:54.331: INFO: Pod "pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076354ms Oct 23 01:38:56.335: INFO: Pod "pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00670541s Oct 23 01:38:58.340: INFO: Pod "pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011012995s STEP: Saw pod success Oct 23 01:38:58.340: INFO: Pod "pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10" satisfied condition "Succeeded or Failed" Oct 23 01:38:58.343: INFO: Trying to get logs from node node1 pod pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10 container test-container: STEP: delete the pod Oct 23 01:38:58.356: INFO: Waiting for pod pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10 to disappear Oct 23 01:38:58.358: INFO: Pod pod-9f9df044-195e-4a3b-a70e-5d04cfb9dd10 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:38:58.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6250" for this suite. 
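------------------------------
The emptydir spec name above, "(non-root,0644,default)", encodes the three parameters being exercised: the container runs as a non-root UID, the file it creates must come out with mode 0644, and the volume uses the default (disk-backed) medium. A hedged sketch of such a pod using k8s.io/api types; the pod name, the UID, and the agnhost mounttest flags shown are approximations of the test's setup, not its literal invocation:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // hypothetical non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// The "default" medium means node-local disk, as opposed to
				// corev1.StorageMediumMemory (tmpfs).
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // listed on the nodes above
				// agnhost's mounttest helper creates a file with 0644 permissions
				// and echoes the observed mode back for the test to verify; treat
				// these flags as an approximation of that invocation.
				Args: []string{"mounttest",
					"--new_file_0644=/test-volume/test-file",
					"--file_perm=/test-volume/test-file"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name, "runs as UID", *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}
------------------------------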
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":525,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:52.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:03.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6968" for this suite. • [SLOW TEST:11.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":20,"skipped":324,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:43.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:44.008: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 23 01:38:49.011: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 23 01:38:49.012: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 23 01:38:51.015: INFO: Creating deployment "test-rollover-deployment" Oct 23 01:38:51.022: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 23 01:38:53.028: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 23 01:38:53.033: INFO: Ensure that both replica sets have 1 created replica Oct 23 01:38:53.037: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 23 01:38:53.043: INFO: Updating deployment test-rollover-deployment Oct 23 01:38:53.043: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 23 01:38:55.050: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 23 01:38:55.057: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 23 01:38:55.062: INFO: all replica sets need to contain the pod-template-hash label Oct 23 01:38:55.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549933, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:38:57.070: INFO: all replica sets need to contain the pod-template-hash label Oct 23 01:38:57.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549936, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:38:59.070: INFO: all replica sets need to contain the pod-template-hash label Oct 23 01:38:59.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549936, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:39:01.072: INFO: all replica sets need to contain the pod-template-hash label Oct 23 01:39:01.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549936, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:39:03.071: INFO: all replica sets need to contain the pod-template-hash label Oct 23 01:39:03.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549936, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:39:05.070: INFO: all replica sets need to contain the pod-template-hash label Oct 23 01:39:05.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549936, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549931, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:39:07.073: INFO: Oct 23 01:39:07.073: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 01:39:07.082: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9039 85ba105e-e7c6-4a2b-a21d-bbcf1877877c 101547 2 2021-10-23 01:38:51 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-23 01:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0046bbfd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 01:38:51 +0000 UTC,LastTransitionTime:2021-10-23 01:38:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-23 01:39:06 +0000 UTC,LastTransitionTime:2021-10-23 01:38:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 23 01:39:07.085: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9039 ceb55b8f-f0ac-43bc-822d-6394a46b63a2 101536 2 2021-10-23 01:38:53 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 85ba105e-e7c6-4a2b-a21d-bbcf1877877c 0xc00471e560 0xc00471e561}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85ba105e-e7c6-4a2b-a21d-bbcf1877877c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00471e5d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:39:07.085: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 23 01:39:07.085: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9039 ed1adf6b-870f-4598-8c60-80760c0f6a48 101546 2 2021-10-23 01:38:44 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 85ba105e-e7c6-4a2b-a21d-bbcf1877877c 0xc00471e357 0xc00471e358}] [] [{e2e.test Update apps/v1 2021-10-23 01:38:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 01:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85ba105e-e7c6-4a2b-a21d-bbcf1877877c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00471e3f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:39:07.086: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9039 f6f20453-54de-4b71-907c-83408567fd22 101271 2 2021-10-23 01:38:51 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-rollover-deployment 85ba105e-e7c6-4a2b-a21d-bbcf1877877c 0xc00471e467 0xc00471e468}] [] [{kube-controller-manager Update apps/v1 2021-10-23 01:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85ba105e-e7c6-4a2b-a21d-bbcf1877877c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00471e4f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 01:39:07.090: INFO: Pod "test-rollover-deployment-98c5f4599-z2jl9" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-z2jl9 test-rollover-deployment-98c5f4599- deployment-9039 274b32fe-b818-4658-b0c8-511fd546f2b5 101337 0 2021-10-23 01:38:53 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.200" ], "mac": "0e:64:7d:3a:6a:fc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.200" ], "mac": "0e:64:7d:3a:6a:fc", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 ceb55b8f-f0ac-43bc-822d-6394a46b63a2 0xc00471eacf 0xc00471eae0}] [] [{kube-controller-manager Update v1 2021-10-23 01:38:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceb55b8f-f0ac-43bc-822d-6394a46b63a2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 01:38:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 01:38:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.200\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9kxm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kxm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:38:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 01:38:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.200,StartTime:2021-10-23 01:38:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 01:38:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://5a6f23142962974eed0de8e45e49a86ecc2f067c849c3af8af894d64e0927bd1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:07.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9039" for this suite. 
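The rollover test above seeds pods with a bare ReplicaSet ("test-rollover-controller"), creates a Deployment over the same label selector, updates the Deployment's image, and then polls DeploymentStatus until the new ReplicaSet ("test-rollover-deployment-98c5f4599") owns every ready replica and the old ones are scaled to zero. A minimal client-go sketch of that status poll, assuming the kubeConfig path shown in the log; this is a simplified stand-in, not the e2e framework's own helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll until the rolling update completes: the controller has observed
        // the latest generation, every replica is updated and available, and no
        // surplus pods remain from the old ReplicaSets.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            d, err := cs.AppsV1().Deployments("deployment-9039").
                Get(context.TODO(), "test-rollover-deployment", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            complete := d.Status.ObservedGeneration >= d.Generation &&
                d.Status.UpdatedReplicas == *d.Spec.Replicas &&
                d.Status.AvailableReplicas == *d.Spec.Replicas &&
                d.Status.Replicas == *d.Spec.Replicas
            return complete, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("rollover complete")
    }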
• [SLOW TEST:23.118 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":34,"skipped":628,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:07.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Oct 23 01:39:07.179: INFO: created test-event-1 Oct 23 01:39:07.182: INFO: created test-event-2 Oct 23 01:39:07.185: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Oct 23 01:39:07.188: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Oct 23 01:39:07.210: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:07.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4433" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":35,"skipped":649,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:03.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-ebd8d26d-0bd7-4c0d-8023-0a31d726ef55 STEP: Creating a pod to test consume configMaps Oct 23 01:39:03.791: INFO: Waiting up to 5m0s for pod "pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c" in namespace "configmap-3222" to be "Succeeded or Failed" Oct 23 01:39:03.796: INFO: Pod "pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.965157ms Oct 23 01:39:05.799: INFO: Pod "pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008264104s Oct 23 01:39:07.803: INFO: Pod "pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012155064s STEP: Saw pod success Oct 23 01:39:07.803: INFO: Pod "pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c" satisfied condition "Succeeded or Failed" Oct 23 01:39:07.806: INFO: Trying to get logs from node node2 pod pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c container agnhost-container: STEP: delete the pod Oct 23 01:39:07.822: INFO: Waiting for pod pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c to disappear Oct 23 01:39:07.824: INFO: Pod pod-configmaps-25b2a301-ba8b-462a-a850-1bb6a701f21c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:07.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3222" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":329,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:07.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Oct 23 01:39:07.269: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6794 7ab18897-cd87-4b65-966e-2369f7eabf57 101570 0 2021-10-23 01:39:07 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-23 01:39:07 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h9kzt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9kzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Toleration
s:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 01:39:07.272: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:09.276: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:11.276: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Oct 23 01:39:11.276: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6794 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:39:11.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Oct 23 01:39:11.377: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6794 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:39:11.377: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:39:11.471: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:11.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6794" for this suite. 
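The DNS test just finished builds exactly the pod shown in the dump above: dnsPolicy None plus an explicit PodDNSConfig, so the kubelet writes only the supplied resolver entries into the container's /etc/resolv.conf, which the two agnhost exec probes then read back. A sketch of the relevant spec fields, values taken from the dump (corev1 is k8s.io/api/core/v1, metav1 is k8s.io/apimachinery/pkg/apis/meta/v1):

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
        Spec: corev1.PodSpec{
            DNSPolicy: corev1.DNSNone, // ignore cluster DNS entirely
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
            Containers: []corev1.Container{{
                Name:  "agnhost-container",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                Args:  []string{"pause"},
            }},
        },
    }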
• ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":36,"skipped":655,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:32.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1023 01:38:12.489949 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 01:39:14.508: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 23 01:39:14.508: INFO: Deleting pod "simpletest.rc-292vs" in namespace "gc-4596" Oct 23 01:39:14.517: INFO: Deleting pod "simpletest.rc-95b8r" in namespace "gc-4596" Oct 23 01:39:14.524: INFO: Deleting pod "simpletest.rc-99m8l" in namespace "gc-4596" Oct 23 01:39:14.531: INFO: Deleting pod "simpletest.rc-9vk5x" in namespace "gc-4596" Oct 23 01:39:14.537: INFO: Deleting pod "simpletest.rc-bmrm9" in namespace "gc-4596" Oct 23 01:39:14.543: INFO: Deleting pod "simpletest.rc-d49qr" in namespace "gc-4596" Oct 23 01:39:14.548: INFO: Deleting pod "simpletest.rc-ghmq9" in namespace "gc-4596" Oct 23 01:39:14.554: INFO: Deleting pod "simpletest.rc-gncll" in namespace "gc-4596" Oct 23 01:39:14.559: INFO: Deleting pod "simpletest.rc-mwbkf" in namespace "gc-4596" Oct 23 01:39:14.565: INFO: Deleting pod "simpletest.rc-nd8mp" in namespace "gc-4596" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:14.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4596" for this suite. 
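The garbage-collector test above hinges on delete propagation: deleting the ReplicationController with PropagationPolicy Orphan strips the owner references instead of cascading, which is why the simpletest.rc-* pods survive the 30-second watch and have to be deleted one by one afterwards. A client-go sketch of such a delete, with the clientset assumed and the RC name inferred from the pod names above:

    // Orphan the RC's pods instead of cascading the delete to them.
    orphan := metav1.DeletePropagationOrphan
    err := cs.CoreV1().ReplicationControllers("gc-4596").Delete(
        context.TODO(), "simpletest.rc",
        metav1.DeleteOptions{PropagationPolicy: &orphan},
    )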
• [SLOW TEST:102.152 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":32,"skipped":629,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:14.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:14.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9049" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":33,"skipped":639,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:14.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:38:14.899: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:16.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8223" for this suite. 
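Listing CustomResourceDefinition objects, as the CRD test above does, goes through the apiextensions clientset rather than the core one. A minimal sketch, assuming cfg is the rest.Config built from the kubeConfig shown in the log:

    // apiextclient = k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset
    crdClient, err := apiextclient.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().
        List(context.TODO(), metav1.ListOptions{})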
• [SLOW TEST:61.289 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":33,"skipped":631,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:16.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:16.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5150" for this suite. 
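The Events API test exercises the events.k8s.io/v1 group end to end: create, list (including the field-selector filters on source and reportingController that the STEPs above call out), patch, update, delete. A sketch of the two less obvious calls; the event name, label, and controller value are illustrative:

    // Field-selector list, as in "filtering on reportingController" above.
    evs, err := cs.EventsV1().Events("events-5150").List(context.TODO(),
        metav1.ListOptions{FieldSelector: "reportingController=test-controller"})

    // Strategic-merge patch of a single event.
    // types = k8s.io/apimachinery/pkg/types
    patch := []byte(`{"metadata":{"labels":{"patched":"true"}}}`)
    _, err = cs.EventsV1().Events("events-5150").Patch(context.TODO(),
        "test-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{})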
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":34,"skipped":659,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:11.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 01:39:11.530: INFO: The status of Pod labelsupdatee6a2840d-7e0d-4ac7-bede-75656c6a97e1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:13.535: INFO: The status of Pod labelsupdatee6a2840d-7e0d-4ac7-bede-75656c6a97e1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:15.534: INFO: The status of Pod labelsupdatee6a2840d-7e0d-4ac7-bede-75656c6a97e1 is Running (Ready = true) Oct 23 01:39:16.095: INFO: Successfully updated pod "labelsupdatee6a2840d-7e0d-4ac7-bede-75656c6a97e1" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:20.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3962" for this suite. 
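The projected-downwardAPI test works because metadata.labels is one of the few pod fields the kubelet keeps live: after the test updates the pod's labels, the mounted file is rewritten in place with no container restart. A sketch of the projection such a pod mounts; the volume name is illustrative:

    vol := corev1.Volume{
        Name: "podinfo", // illustrative name
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "labels", // file tracks pod label updates
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.labels",
                            },
                        }},
                    },
                }},
            },
        },
    }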
• [SLOW TEST:8.642 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":656,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:07.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Oct 23 01:39:07.909: INFO: Waiting up to 5m0s for pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c" in namespace "containers-2174" to be "Succeeded or Failed" Oct 23 01:39:07.913: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17375ms Oct 23 01:39:09.916: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007805106s Oct 23 01:39:11.921: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012710308s Oct 23 01:39:13.926: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017682735s Oct 23 01:39:15.931: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022256827s Oct 23 01:39:17.935: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026092786s Oct 23 01:39:19.938: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029497986s Oct 23 01:39:21.942: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.033368534s STEP: Saw pod success Oct 23 01:39:21.942: INFO: Pod "client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c" satisfied condition "Succeeded or Failed" Oct 23 01:39:21.944: INFO: Trying to get logs from node node1 pod client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c container agnhost-container: STEP: delete the pod Oct 23 01:39:21.958: INFO: Waiting for pod client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c to disappear Oct 23 01:39:21.960: INFO: Pod client-containers-b153011d-947a-4ff5-b4e7-e105ef935e2c no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:21.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2174" for this suite. • [SLOW TEST:14.097 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:16.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-f8d4faad-8bdc-4fd2-a8df-09d4d2a714ab STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:22.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5313" for this suite. 
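The "binary data" ConfigMap test above rests on the BinaryData field, which carries arbitrary bytes alongside the UTF-8-only Data map; the two key sets must not overlap. A sketch with illustrative keys and bytes:

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
        Data:       map[string]string{"data-1": "value-1"},      // UTF-8 text only
        BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad}}, // arbitrary bytes
    }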
• [SLOW TEST:6.064 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":661,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:22.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:22.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2120" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":36,"skipped":661,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:14.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 01:39:22.769: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:22.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3202" for this suite. 
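The termination-message test above checks that a container writing "OK" to its terminationMessagePath surfaces that string in its terminated state; FallbackToLogsOnError additionally means the container's log tail is used when the file is empty and the container failed. A sketch of the container spec, with the image and command as illustrative assumptions:

    ctr := corev1.Container{
        Name:    "termination-message-container",
        Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
        Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
        TerminationMessagePath:   "/dev/termination-log", // the default path
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }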
• [SLOW TEST:8.081 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":666,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:57.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 23 01:38:57.243: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:38:59.247: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:01.248: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 01:39:01.264: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:03.268: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:05.266: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 23 01:39:05.275: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:05.277: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:07.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:07.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:09.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:09.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:11.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:11.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:13.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:13.280: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:15.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:15.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:17.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:17.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:19.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:19.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:21.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:21.282: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:23.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:23.281: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 01:39:25.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 01:39:25.284: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:25.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5033" for this suite. 
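The lifecycle-hook test pairs two pods: pod-handle-http-request records incoming requests, and pod-with-prestop-exec-hook carries a PreStop exec hook that calls it. Deleting the second pod must fire the hook before the container exits, which the final "check prestop hook" step verifies against the handler. A sketch of the hook wiring; the handler address is elided, and note corev1.Handler was renamed LifecycleHandler in client-go v0.23+:

    ctr := corev1.Container{
        Name:  "pod-with-prestop-exec-hook",
        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
        Lifecycle: &corev1.Lifecycle{
            PreStop: &corev1.Handler{ // LifecycleHandler in client-go >= v0.23
                Exec: &corev1.ExecAction{
                    // The hook must reach the handler pod before this container
                    // exits; <handler-ip> stands in for its pod IP.
                    Command: []string{"sh", "-c",
                        "curl http://<handler-ip>:8080/echo?msg=prestop"},
                },
            },
        },
    }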
• [SLOW TEST:28.099 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":343,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:58.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Oct 23 01:38:58.431: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:00.436: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:02.438: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled Oct 23 01:39:02.450: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:04.456: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:06.454: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides Oct 23 01:39:06.469: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:08.471: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:10.472: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:12.474: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:14.474: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:16.474: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:18.472: INFO: The status of Pod pod3 is Running (Ready = true) Oct 23 01:39:18.485: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Oct 23 
01:39:20.489: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:22.489: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Oct 23 01:39:22.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-7453 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:39:22.491: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 Oct 23 01:39:22.584: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-7453 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:39:22.584: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP Oct 23 01:39:22.669: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-7453 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 01:39:22.669: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:27.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-7453" for this suite. • [SLOW TEST:29.409 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":535,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:22.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 01:39:22.549: INFO: Waiting up to 5m0s for pod "downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab" in namespace "downward-api-139" to be "Succeeded or Failed" Oct 23 01:39:22.551: INFO: Pod "downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.090627ms Oct 23 01:39:24.555: INFO: Pod "downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006026246s Oct 23 01:39:26.568: INFO: Pod "downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018611895s Oct 23 01:39:28.572: INFO: Pod "downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022779223s STEP: Saw pod success Oct 23 01:39:28.572: INFO: Pod "downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab" satisfied condition "Succeeded or Failed" Oct 23 01:39:28.576: INFO: Trying to get logs from node node2 pod downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab container dapi-container: STEP: delete the pod Oct 23 01:39:28.605: INFO: Waiting for pod downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab to disappear Oct 23 01:39:28.607: INFO: Pod downward-api-65ac5d06-9cad-4d9c-baa0-afab345a17ab no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:28.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-139" for this suite. • [SLOW TEST:6.098 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":701,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:25.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:39:25.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b" in namespace "downward-api-9513" to be "Succeeded or Failed" Oct 23 01:39:25.346: INFO: Pod "downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01441ms Oct 23 01:39:27.350: INFO: Pod "downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005412498s Oct 23 01:39:29.354: INFO: Pod "downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00995028s STEP: Saw pod success Oct 23 01:39:29.354: INFO: Pod "downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b" satisfied condition "Succeeded or Failed" Oct 23 01:39:29.357: INFO: Trying to get logs from node node1 pod downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b container client-container: STEP: delete the pod Oct 23 01:39:29.370: INFO: Waiting for pod downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b to disappear Oct 23 01:39:29.372: INFO: Pod downwardapi-volume-4ba615cf-b8f0-4d8c-8391-bfd40e1b327b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:29.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9513" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":344,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:29.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:29.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3852" for this suite. 
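The same ServiceAccount lifecycle can be reproduced with kubectl; a sketch with illustrative names (kubectl label stands in here for the test's patch step):

kubectl create serviceaccount demo-sa
kubectl label serviceaccount demo-sa purpose=lifecycle    # stand-in for the patch step
kubectl get serviceaccounts -l purpose=lifecycle          # find it by LabelSelector
kubectl delete serviceaccount demo-sa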
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":25,"skipped":359,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":347,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:21.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Oct 23 01:39:21.990: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Oct 23 01:39:21.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' Oct 23 01:39:22.397: INFO: stderr: "" Oct 23 01:39:22.398: INFO: stdout: "service/agnhost-replica created\n" Oct 23 01:39:22.398: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Oct 23 01:39:22.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' Oct 23 01:39:22.707: INFO: stderr: "" Oct 23 01:39:22.707: INFO: stdout: "service/agnhost-primary created\n" Oct 23 01:39:22.707: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Oct 23 01:39:22.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' Oct 23 01:39:23.023: INFO: stderr: "" Oct 23 01:39:23.023: INFO: stdout: "service/frontend created\n" Oct 23 01:39:23.023: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Oct 23 01:39:23.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' Oct 23 01:39:23.343: INFO: stderr: "" Oct 23 01:39:23.343: INFO: stdout: "deployment.apps/frontend created\n" Oct 23 01:39:23.344: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 23 01:39:23.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' Oct 23 01:39:23.643: INFO: stderr: "" Oct 23 01:39:23.643: INFO: stdout: "deployment.apps/agnhost-primary created\n" Oct 23 01:39:23.643: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 23 01:39:23.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' Oct 23 01:39:23.981: INFO: stderr: "" Oct 23 01:39:23.981: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Oct 23 01:39:23.981: INFO: Waiting for all frontend pods to be Running. Oct 23 01:39:34.033: INFO: Waiting for frontend to serve content. Oct 23 01:39:34.039: INFO: Trying to add a new entry to the guestbook. Oct 23 01:39:34.047: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Oct 23 01:39:34.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' Oct 23 01:39:34.174: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:39:34.174: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Oct 23 01:39:34.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' Oct 23 01:39:34.289: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:39:34.289: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 23 01:39:34.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' Oct 23 01:39:34.409: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:39:34.409: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 23 01:39:34.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' Oct 23 01:39:34.526: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:39:34.526: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 23 01:39:34.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' Oct 23 01:39:34.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:39:34.659: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 23 01:39:34.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' Oct 23 01:39:34.778: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 01:39:34.778: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:34.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1290" for this suite. 
• [SLOW TEST:12.821 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":23,"skipped":347,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:28.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1010.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1010.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1010.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1010.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1010.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1010.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 01:39:36.693: INFO: DNS probes using dns-1010/dns-test-9ae73472-95b1-49af-b267-905a183472d2 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:36.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1010" for this suite. 
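Those getent/dig probes resolve because of a headless Service plus a pod whose hostname and subdomain match it. A minimal sketch of an equivalent fixture (assumed shape in the spirit of the test, not copied from the suite):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None          # headless: DNS serves the pod's A record directly
  selector:
    dns-test: "true"
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    dns-test: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: querier
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
EOF
# dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local now resolves,
# which is the name the wheezy/jessie probe loops check with getent.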
• [SLOW TEST:8.096 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":703,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:34.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:39:34.861: INFO: The status of Pod busybox-host-aliasesc93dafd9-56d5-4d4e-875f-62ae392eda89 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:36.863: INFO: The status of Pod busybox-host-aliasesc93dafd9-56d5-4d4e-875f-62ae392eda89 is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:38.865: INFO: The status of Pod busybox-host-aliasesc93dafd9-56d5-4d4e-875f-62ae392eda89 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:38.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5868" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":371,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:27.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:38.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1766" for this suite. • [SLOW TEST:11.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":34,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:38:43.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-579e25c2-6f03-4f69-acfa-d1ee66ffb6bd in namespace container-probe-2807 Oct 23 01:38:47.427: INFO: Started pod busybox-579e25c2-6f03-4f69-acfa-d1ee66ffb6bd in namespace container-probe-2807 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 01:38:47.430: INFO: Initial restart count of pod busybox-579e25c2-6f03-4f69-acfa-d1ee66ffb6bd is 0 Oct 23 01:39:39.535: INFO: Restart count of pod container-probe-2807/busybox-579e25c2-6f03-4f69-acfa-d1ee66ffb6bd is now 1 (52.104728374s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:39.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2807" for this suite. 
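The restart counted above is driven by an exec liveness probe. A minimal sketch in the spirit of the test, following the classic pattern from the Kubernetes docs (timings and image tag are illustrative): the container creates /tmp/health, removes it after 30 seconds, and the failing `cat /tmp/health` probe then triggers a restart.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# kubectl get pod busybox-liveness shows RESTARTS going 0 -> 1, matching the
# restartCount transition the test waits for.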
• [SLOW TEST:56.166 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":614,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:22.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:39.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7594" for this suite. • [SLOW TEST:17.062 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":35,"skipped":689,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:38.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:39:38.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af" in namespace "projected-2131" to be "Succeeded or Failed" Oct 23 01:39:38.960: INFO: Pod "downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952227ms Oct 23 01:39:40.962: INFO: Pod "downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004546605s Oct 23 01:39:42.967: INFO: Pod "downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009181601s STEP: Saw pod success Oct 23 01:39:42.967: INFO: Pod "downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af" satisfied condition "Succeeded or Failed" Oct 23 01:39:42.971: INFO: Trying to get logs from node node1 pod downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af container client-container: STEP: delete the pod Oct 23 01:39:42.985: INFO: Waiting for pod downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af to disappear Oct 23 01:39:42.987: INFO: Pod downwardapi-volume-61be5f48-7aca-4b7b-8050-208422ca27af no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:42.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2131" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:20.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 23 01:39:20.231: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:43.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5318" for this suite. • [SLOW TEST:23.360 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":38,"skipped":698,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:39.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-d3e8b561-631a-4f74-8923-f534168365a9 STEP: Creating a pod to test consume secrets Oct 23 01:39:39.963: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285" in namespace "projected-726" to be "Succeeded or Failed" Oct 23 01:39:39.965: INFO: Pod "pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.241243ms Oct 23 01:39:41.971: INFO: Pod "pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007912355s Oct 23 01:39:43.976: INFO: Pod "pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012739739s Oct 23 01:39:45.981: INFO: Pod "pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018288598s STEP: Saw pod success Oct 23 01:39:45.982: INFO: Pod "pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285" satisfied condition "Succeeded or Failed" Oct 23 01:39:45.984: INFO: Trying to get logs from node node1 pod pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285 container projected-secret-volume-test: STEP: delete the pod Oct 23 01:39:46.009: INFO: Waiting for pod pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285 to disappear Oct 23 01:39:46.011: INFO: Pod pod-projected-secrets-0b6b2aab-a823-4309-9e73-59e6474d7285 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:46.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-726" for this suite. • [SLOW TEST:6.091 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":713,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:43.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-bdeea628-0d8e-4afb-b768-95e180f2fc3d STEP: Creating a pod to test consume configMaps Oct 23 01:39:43.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726" in namespace "configmap-754" to be "Succeeded or Failed" Oct 23 01:39:43.154: INFO: Pod "pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726": Phase="Pending", Reason="", readiness=false. Elapsed: 1.995019ms Oct 23 01:39:45.159: INFO: Pod "pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006597399s Oct 23 01:39:47.166: INFO: Pod "pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013284911s STEP: Saw pod success Oct 23 01:39:47.166: INFO: Pod "pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726" satisfied condition "Succeeded or Failed" Oct 23 01:39:47.168: INFO: Trying to get logs from node node1 pod pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726 container agnhost-container: STEP: delete the pod Oct 23 01:39:47.196: INFO: Waiting for pod pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726 to disappear Oct 23 01:39:47.197: INFO: Pod pod-configmaps-52fc4c8d-ed4d-4237-8be8-527fb0da7726 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:47.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-754" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:36.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 01:39:36.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9" in namespace "downward-api-3945" to be "Succeeded or Failed" Oct 23 01:39:36.786: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023057ms Oct 23 01:39:38.789: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006941635s Oct 23 01:39:40.793: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010920774s Oct 23 01:39:42.796: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013938562s Oct 23 01:39:44.799: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017426182s Oct 23 01:39:46.808: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026007093s Oct 23 01:39:48.812: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.030233192s STEP: Saw pod success Oct 23 01:39:48.812: INFO: Pod "downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9" satisfied condition "Succeeded or Failed" Oct 23 01:39:48.815: INFO: Trying to get logs from node node2 pod downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9 container client-container: STEP: delete the pod Oct 23 01:39:48.827: INFO: Waiting for pod downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9 to disappear Oct 23 01:39:48.830: INFO: Pod downwardapi-volume-0b3865ed-16fd-4aab-aae1-15f4b2ae8dd9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:48.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3945" for this suite. • [SLOW TEST:12.089 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":723,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:38.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 23 01:39:39.007: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5597 c147f701-d3af-4385-b1b9-1a11c1485821 102619 0 2021-10-23 01:39:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 01:39:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:39:39.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5597 c147f701-d3af-4385-b1b9-1a11c1485821 102620 0 2021-10-23 01:39:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 01:39:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:39:39.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5597 c147f701-d3af-4385-b1b9-1a11c1485821 102621 0 2021-10-23 01:39:38 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 01:39:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 23 01:39:49.026: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5597 c147f701-d3af-4385-b1b9-1a11c1485821 102973 0 2021-10-23 01:39:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 01:39:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:39:49.027: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5597 c147f701-d3af-4385-b1b9-1a11c1485821 102974 0 2021-10-23 01:39:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 01:39:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 01:39:49.027: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5597 c147f701-d3af-4385-b1b9-1a11c1485821 102975 0 2021-10-23 01:39:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 01:39:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:49.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5597" for this suite. 
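The same label-selector watch semantics can be observed with kubectl: because the watch is filtered by label, relabeling the object out of the selector surfaces as DELETED, and restoring the label surfaces as ADDED. An illustrative command (assuming a kubectl new enough to support --output-watch-events):

kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events
# In another shell, changing the label value produces a DELETED event and
# changing it back produces ADDED, matching the Got : DELETED / ADDED lines above.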
• [SLOW TEST:10.061 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":25,"skipped":421,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:29.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Oct 23 01:39:29.523: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.523: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.527: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.527: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.534: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.534: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.549: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:29.549: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 01:39:32.699: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment-static:true] Oct 23 01:39:32.699: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment-static:true] Oct 23 01:39:33.647: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Oct 23 01:39:33.653: INFO: observed event type ADDED STEP: waiting for Replicas to scale Oct 23 01:39:33.654: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 
01:39:33.654: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.654: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.654: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.654: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.654: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 0 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.655: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.659: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.659: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.666: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.666: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:33.672: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:33.672: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:33.679: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:33.679: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:38.090: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:38.090: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:38.101: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 STEP: listing Deployments Oct 23 01:39:38.105: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Oct 23 01:39:38.116: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Oct 23 01:39:38.122: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:38.122: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:38.127: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment:updated 
test-deployment-static:true] Oct 23 01:39:38.134: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:38.138: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:42.697: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:42.704: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:42.708: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:42.715: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:42.719: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 01:39:50.038: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 1 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 2 Oct 23 01:39:50.063: INFO: observed Deployment test-deployment in namespace deployment-9330 with ReadyReplicas 3 STEP: deleting the Deployment Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED Oct 23 01:39:50.070: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 01:39:50.073: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:50.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9330" for this suite. • [SLOW TEST:20.589 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":26,"skipped":386,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:39.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 01:39:39.624: INFO: The status of Pod labelsupdatea60ac89d-bbcb-4c8f-b14f-871e122c148b is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:41.628: INFO: The status of Pod labelsupdatea60ac89d-bbcb-4c8f-b14f-871e122c148b is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:43.627: INFO: The status of Pod labelsupdatea60ac89d-bbcb-4c8f-b14f-871e122c148b is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:45.627: INFO: The status of Pod labelsupdatea60ac89d-bbcb-4c8f-b14f-871e122c148b is Pending, waiting for it to be Running (with Ready = true) Oct 23 01:39:47.630: INFO: The status of Pod labelsupdatea60ac89d-bbcb-4c8f-b14f-871e122c148b is Running (Ready = true) Oct 23 01:39:48.147: INFO: Successfully updated pod "labelsupdatea60ac89d-bbcb-4c8f-b14f-871e122c148b" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:52.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9961" for this suite. 
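
(For reference: the Downward API behavior exercised above can be reproduced by hand with a downwardAPI volume that projects metadata.labels into a file; after the labels change, the kubelet rewrites the projected file. A minimal sketch, assuming a reachable cluster and a configured kubectl; the pod name and labels below are illustrative, not the generated ones from this run.)

# Pod that mounts its own labels at /etc/podinfo/labels via the downward API.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: client
    image: busybox:1.34
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl wait --for=condition=Ready pod/labels-demo
# Update the labels; the projected file should follow shortly afterwards.
kubectl label pod labels-demo stage=after --overwrite
kubectl logs labels-demo --tail=3
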
• [SLOW TEST:12.608 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":630,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:52.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:52.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1332" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":33,"skipped":637,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:43.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 23 01:39:50.133: INFO: Successfully updated pod "adopt-release-wzv2m" STEP: Checking that the Job readopts the Pod Oct 23 01:39:50.133: INFO: Waiting up to 15m0s for pod "adopt-release-wzv2m" in namespace "job-918" to be "adopted" Oct 23 01:39:50.137: INFO: Pod "adopt-release-wzv2m": Phase="Running", Reason="", readiness=true. Elapsed: 3.393072ms Oct 23 01:39:52.140: INFO: Pod "adopt-release-wzv2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.007049339s Oct 23 01:39:52.140: INFO: Pod "adopt-release-wzv2m" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 23 01:39:52.650: INFO: Successfully updated pod "adopt-release-wzv2m" STEP: Checking that the Job releases the Pod Oct 23 01:39:52.650: INFO: Waiting up to 15m0s for pod "adopt-release-wzv2m" in namespace "job-918" to be "released" Oct 23 01:39:52.653: INFO: Pod "adopt-release-wzv2m": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.232477ms Oct 23 01:39:54.657: INFO: Pod "adopt-release-wzv2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.006279741s Oct 23 01:39:54.657: INFO: Pod "adopt-release-wzv2m" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:54.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-918" for this suite. • [SLOW TEST:11.075 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":39,"skipped":709,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:48.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Oct 23 01:39:48.915: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Oct 23 01:39:50.925: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Oct 23 01:39:52.936: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:54.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-3674" for this suite. 
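
(Context for the mirroring checks above: the endpointslice-mirroring controller watches hand-written Endpoints belonging to a Service without a selector and keeps a mirrored EndpointSlice in sync. A rough sketch of the create/update/delete cycle, assuming a configured kubectl; the object names are illustrative, while the 10.1.2.3 and 10.2.3.4 addresses match the ones in the log.)

# Selectorless Service plus hand-written Endpoints: no controller manages these for us.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example-custom-endpoints
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: example-custom-endpoints
subsets:
- addresses:
  - ip: 10.1.2.3
  ports:
  - port: 80
EOF
# create: a mirrored slice appears, labelled with the owning Service's name.
kubectl get endpointslices -l kubernetes.io/service-name=example-custom-endpoints
# update: change the address; the mirrored slice follows.
kubectl patch endpoints example-custom-endpoints --type=merge \
  -p '{"subsets":[{"addresses":[{"ip":"10.2.3.4"}],"ports":[{"port":80}]}]}'
# delete: removing the Endpoints removes the mirrored slice as well.
kubectl delete endpoints example-custom-endpoints
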
• [SLOW TEST:6.069 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":40,"skipped":745,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:49.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:39:49.071: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-18f9d51d-5a8f-4f38-a5b7-686f7ef548c0" in namespace "security-context-test-5510" to be "Succeeded or Failed" Oct 23 01:39:49.073: INFO: Pod "busybox-readonly-false-18f9d51d-5a8f-4f38-a5b7-686f7ef548c0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.800372ms Oct 23 01:39:51.076: INFO: Pod "busybox-readonly-false-18f9d51d-5a8f-4f38-a5b7-686f7ef548c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00500062s Oct 23 01:39:53.080: INFO: Pod "busybox-readonly-false-18f9d51d-5a8f-4f38-a5b7-686f7ef548c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008591801s Oct 23 01:39:55.083: INFO: Pod "busybox-readonly-false-18f9d51d-5a8f-4f38-a5b7-686f7ef548c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011517403s Oct 23 01:39:55.083: INFO: Pod "busybox-readonly-false-18f9d51d-5a8f-4f38-a5b7-686f7ef548c0" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5510" for this suite. 
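
(The writable-rootfs case above reduces to a pod whose container sets readOnlyRootFilesystem: false and succeeds only if it can write to its root filesystem. A minimal hand-run sketch; the pod name is illustrative.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.34
    securityContext:
      readOnlyRootFilesystem: false
    # Exits 0 only if / is writable; with readOnlyRootFilesystem: true this would fail.
    command: ["sh", "-c", "echo ok > /write-check && cat /write-check"]
EOF
# Expect the phase to reach Succeeded, as in the run above.
kubectl get pod readonly-false-demo -o jsonpath='{.status.phase}'
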
• [SLOW TEST:6.051 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":422,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ Oct 23 01:39:55.119: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:46.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:39:46.078: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 23 01:39:51.082: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Oct 23 01:39:51.091: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Oct 23 01:39:51.100: INFO: observed ReplicaSet test-rs in namespace replicaset-8419 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 01:39:51.107: INFO: observed ReplicaSet test-rs in namespace replicaset-8419 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 01:39:51.117: INFO: observed ReplicaSet test-rs in namespace replicaset-8419 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 01:39:51.120: INFO: observed ReplicaSet test-rs in namespace replicaset-8419 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 01:39:57.550: INFO: observed ReplicaSet test-rs in namespace replicaset-8419 with ReadyReplicas 2, AvailableReplicas 2 Oct 23 01:39:57.559: INFO: observed ReplicaSet test-rs in namespace replicaset-8419 with ReadyReplicas 3, found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:57.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8419" for this suite.
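
(The Replace and Patch flow above amounts to: create a ReplicaSet, scale it by replacing spec.replicas, then patch its metadata and watch status.readyReplicas converge. A hand-run sketch with illustrative names; the image is the httpd variant commonly used by these tests.)

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-rs-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-pod-demo
  template:
    metadata:
      labels:
        app: sample-pod-demo
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38-alpine
EOF
# "Replace": update spec.replicas (kubectl scale issues the update for us).
kubectl scale replicaset test-rs-demo --replicas=3
# "Patch": strategic-merge-patch a label onto the ReplicaSet.
kubectl patch replicaset test-rs-demo -p '{"metadata":{"labels":{"test-rs":"patched"}}}'
# Poll status until readyReplicas reaches 3, as the watch events above show.
kubectl get replicaset test-rs-demo -o jsonpath='{.status.readyReplicas}'
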
• [SLOW TEST:11.522 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":37,"skipped":725,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} Oct 23 01:39:57.569: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:52.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 01:39:52.318: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:39:59.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5265" for this suite. 
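
(What the init-container case above verifies: with restartPolicy: Never, a failing init container is not retried, the pod goes straight to Failed, and the app container is never started. A minimal sketch with illustrative names.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: failing-init
    image: busybox:1.34
    command: ["sh", "-c", "exit 1"]   # fails once; Never means no retry
  containers:
  - name: app-never-started
    image: busybox:1.34
    command: ["sh", "-c", "echo unreachable"]
EOF
# Expect phase Failed; the app container stays Waiting (PodInitializing).
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'
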
• [SLOW TEST:7.613 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:50.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 23 01:39:50.121: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:40:04.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3578" for this suite. • [SLOW TEST:14.163 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":392,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Oct 23 01:40:04.262: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:54.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Oct 23 01:39:54.708: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
Oct 23 01:39:55.088: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 23 01:39:57.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:39:59.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:40:01.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:40:03.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:40:05.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:40:07.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770549995, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 01:40:09.938: INFO: Waited 814.163145ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Oct 23 01:40:10.347: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:40:11.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9656" for this suite. 
• [SLOW TEST:16.556 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":40,"skipped":717,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} Oct 23 01:40:11.241: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:39:47.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-xzl6 STEP: Creating a pod to test atomic-volume-subpath Oct 23 01:39:47.303: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xzl6" in namespace "subpath-2208" to be "Succeeded or Failed" Oct 23 01:39:47.308: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.793983ms Oct 23 01:39:49.311: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008115599s Oct 23 01:39:51.318: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014830668s Oct 23 01:39:53.323: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 6.019444661s Oct 23 01:39:55.326: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 8.022769978s Oct 23 01:39:57.330: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 10.026697484s Oct 23 01:39:59.334: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 12.030755113s Oct 23 01:40:01.339: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 14.035419963s Oct 23 01:40:03.342: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 16.038589458s Oct 23 01:40:05.345: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 18.041455897s Oct 23 01:40:07.350: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 20.046650766s Oct 23 01:40:09.353: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. Elapsed: 22.04957271s Oct 23 01:40:11.357: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.053334012s Oct 23 01:40:13.359: INFO: Pod "pod-subpath-test-secret-xzl6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.056110864s STEP: Saw pod success Oct 23 01:40:13.359: INFO: Pod "pod-subpath-test-secret-xzl6" satisfied condition "Succeeded or Failed" Oct 23 01:40:13.362: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-xzl6 container test-container-subpath-secret-xzl6: STEP: delete the pod Oct 23 01:40:13.375: INFO: Waiting for pod pod-subpath-test-secret-xzl6 to disappear Oct 23 01:40:13.377: INFO: Pod pod-subpath-test-secret-xzl6 no longer exists STEP: Deleting pod pod-subpath-test-secret-xzl6 Oct 23 01:40:13.377: INFO: Deleting pod "pod-subpath-test-secret-xzl6" in namespace "subpath-2208" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:40:13.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2208" for this suite. • [SLOW TEST:26.126 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":654,"failed":0} Oct 23 01:40:13.389: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:37:31.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-7351 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-7351 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7351 Oct 23 01:37:31.475: INFO: Found 0 stateful pods, waiting for 1 Oct 23 01:37:41.481: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 23 01:37:41.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 01:37:41.754: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 01:37:41.754: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" Oct 23 01:37:41.754: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 01:37:41.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 23 01:37:51.763: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 23 01:37:51.763: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 01:37:51.776: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:37:51.776: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:37:51.776: INFO: Oct 23 01:37:51.776: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 23 01:37:52.780: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997217671s Oct 23 01:37:53.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99209389s Oct 23 01:37:54.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986813842s Oct 23 01:37:55.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982778645s Oct 23 01:37:56.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978450395s Oct 23 01:37:57.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.97479429s Oct 23 01:37:58.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.97055321s Oct 23 01:37:59.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.96702351s Oct 23 01:38:00.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.986125ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7351 Oct 23 01:38:01.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 01:38:02.082: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 23 01:38:02.082: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 01:38:02.082: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 01:38:02.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 01:38:02.449: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 23 01:38:02.449: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 01:38:02.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 01:38:02.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 01:38:07.676: 
INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 23 01:38:07.676: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 01:38:07.676: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 01:38:07.679: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 01:38:07.679: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 01:38:07.679: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 23 01:38:07.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 01:38:07.968: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 01:38:07.968: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 01:38:07.968: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 01:38:07.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 01:38:08.218: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 01:38:08.218: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 01:38:08.219: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 01:38:08.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 01:38:08.490: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 01:38:08.490: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 01:38:08.491: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 01:38:08.491: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 01:38:08.494: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 23 01:38:18.501: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 23 01:38:18.501: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 23 01:38:18.501: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 23 01:38:18.509: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:18.509: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:18.509: INFO: 
ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:18.509: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:18.509: INFO: Oct 23 01:38:18.509: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 01:38:19.518: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:19.518: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:19.518: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:19.518: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:19.518: INFO: Oct 23 01:38:19.518: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 01:38:20.522: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:20.523: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:20.523: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:20.523: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:20.523: INFO: Oct 23 01:38:20.523: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 01:38:21.527: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:21.527: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:21.527: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:21.527: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:21.527: INFO: Oct 23 01:38:21.527: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 01:38:22.531: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:22.531: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:22.532: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:22.532: INFO: ss-2 node1 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:22.532: INFO: Oct 23 01:38:22.532: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 01:38:23.536: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:23.536: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:23.536: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:23.536: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:23.536: INFO: Oct 23 01:38:23.536: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 01:38:24.540: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:24.540: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:24.540: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:24.541: INFO: Oct 23 01:38:24.541: INFO: StatefulSet ss has not reached scale 0, at 2 Oct 23 01:38:25.544: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:25.544: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:25.544: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:25.544: INFO: Oct 23 01:38:25.544: INFO: StatefulSet ss has not reached scale 0, at 2 Oct 23 01:38:26.548: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:26.548: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:31 +0000 UTC }] Oct 23 01:38:26.548: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:26.548: INFO: Oct 23 01:38:26.548: INFO: StatefulSet ss has not reached scale 0, at 2 Oct 23 01:38:27.551: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 01:38:27.551: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:38:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 01:37:51 +0000 UTC }] Oct 23 01:38:27.551: INFO: Oct 23 01:38:27.551: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7351 Oct 23 01:38:28.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 01:38:28.735: INFO: rc: 1 Oct 23 01:38:28.736: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Oct 23 01:38:38.736: INFO: Running '/usr/local/bin/kubectl
Oct 23 01:38:38.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:38:38.889: INFO: rc: 1
Oct 23 01:38:38.889: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-1" not found
error: exit status 1
[identical retries every 10s from 01:38:48 through 01:43:23, each failing with: Error from server (NotFound): pods "ss-1" not found]
Oct 23 01:43:33.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:43:33.341: INFO: rc: 1
Oct 23 01:43:33.341: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
Oct 23 01:43:33.341: INFO: Scaling statefulset ss to 0
Oct 23 01:43:33.361: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct 23 01:43:33.364: INFO: Deleting all statefulset in ns statefulset-7351
Oct 23 01:43:33.366: INFO: Scaling statefulset ss to 0
Oct 23 01:43:33.374: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 01:43:33.376: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:43:33.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7351" for this suite.

• [SLOW TEST:361.953 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":29,"skipped":432,"failed":0}
Oct 23 01:43:33.403: INFO: Running AfterSuite actions on all nodes
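The burst-scaling spec drives pods unhealthy by breaking their readiness probe rather than killing them: the httpd container's readiness check serves index.html, so moving the file out of the docroot flips Ready to false, and moving it back restores readiness. A standalone sketch of the same toggle, using the image paths and namespace shown above:

  # Make ss-0 fail its readiness probe (the probe fetches /index.html)
  kubectl -n statefulset-7351 exec ss-0 -- /bin/sh -c \
    'mv -v /usr/local/apache2/htdocs/index.html /tmp/'

  # Restore the file so the pod reports Ready again
  kubectl -n statefulset-7351 exec ss-0 -- /bin/sh -c \
    'mv -v /tmp/index.html /usr/local/apache2/htdocs/'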
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:38:42.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1023 01:38:42.417750      30 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:43:42.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-4662" for this suite.

• [SLOW TEST:300.052 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":25,"skipped":390,"failed":0}
Oct 23 01:43:42.447: INFO: Running AfterSuite actions on all nodes
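The suspended-CronJob spec creates the CronJob with spec.suspend set and then polls for the full five minutes to confirm the controller never creates a Job. The same controller behavior can be checked by hand; the spec creates the object suspended from the start, while this sketch patches it after creation, and the name and image are illustrative:

  # Create a cronjob, then suspend it before it fires
  kubectl -n default create cronjob suspend-demo \
    --image=busybox --schedule='*/1 * * * *' -- /bin/sh -c 'date'
  kubectl -n default patch cronjob suspend-demo -p '{"spec":{"suspend":true}}'

  # While suspended, this list should stay empty no matter how long you wait
  kubectl -n default get jobs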
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:37:45.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-9781
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9781
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9781
Oct 23 01:37:45.482: INFO: Found 0 stateful pods, waiting for 1
Oct 23 01:37:55.485: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Oct 23 01:37:55.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 01:37:55.752: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 01:37:55.752: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 01:37:55.752: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Oct 23 01:37:55.756: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Oct 23 01:38:05.761: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct 23 01:38:05.761: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 01:38:05.777: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999471s
Oct 23 01:38:06.781: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994094588s
Oct 23 01:38:07.784: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991097481s
Oct 23 01:38:08.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987845997s
Oct 23 01:38:09.790: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984835282s
Oct 23 01:38:10.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.980862777s
Oct 23 01:38:11.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.976679569s
Oct 23 01:38:12.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.972958792s
Oct 23 01:38:13.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.968902856s
Oct 23 01:38:14.810: INFO: Verifying statefulset ss doesn't scale past 1 for another 966.073202ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9781
Oct 23 01:38:15.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:38:16.076: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 01:38:16.076: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 01:38:16.076: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 01:38:16.080: INFO: Found 1 stateful pods, waiting for 3
Oct 23 01:38:26.085: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:38:26.085: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:38:26.085: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Oct 23 01:38:36.084: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:38:36.084: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 01:38:36.084: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Oct 23 01:38:36.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 01:38:36.379: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 01:38:36.379: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 01:38:36.379: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Oct 23 01:38:36.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 01:38:36.695: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 01:38:36.695: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 01:38:36.695: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Oct 23 01:38:36.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 01:38:37.432: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 01:38:37.432: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 01:38:37.432: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Oct 23 01:38:37.432: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 01:38:37.435: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Oct 23 01:38:47.441: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct 23 01:38:47.441: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Oct 23 01:38:47.441: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Oct 23 01:38:47.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999477s
Oct 23 01:38:48.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996056632s
Oct 23 01:38:49.459: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992440457s
Oct 23 01:38:50.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986019254s
Oct 23 01:38:51.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982383601s
Oct 23 01:38:52.476: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975140921s
Oct 23 01:38:53.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970771958s
Oct 23 01:38:54.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966648292s
Oct 23 01:38:55.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.960107214s
Oct 23 01:38:56.497: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.078686ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9781
Oct 23 01:38:57.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:38:57.751: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 01:38:57.752: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 01:38:57.752: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 01:38:57.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:38:58.012: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 01:38:58.012: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 01:38:58.012: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 01:38:58.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:38:58.721: INFO: rc: 126
Oct 23 01:38:58.721: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
stderr: command terminated with exit code 126
error: exit status 126
Oct 23 01:39:08.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:39:08.880: INFO: rc: 1
Oct 23 01:39:08.880: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
[identical retries every 10s from 01:39:18 through 01:43:52, each failing with: Error from server (NotFound): pods "ss-2" not found]
Oct 23 01:44:03.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9781 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 01:44:03.268: INFO: rc: 1
Oct 23 01:44:03.268: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2:
Oct 23 01:44:03.268: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct 23 01:44:03.278: INFO: Deleting all statefulset in ns statefulset-9781
Oct 23 01:44:03.281: INFO: Scaling statefulset ss to 0
Oct 23 01:44:03.291: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 01:44:03.294: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:44:03.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9781" for this suite.

• [SLOW TEST:377.870 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":17,"skipped":265,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Oct 23 01:44:03.320: INFO: Running AfterSuite actions on all nodes
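This spec exercises the default OrderedReady pod management policy: pods are created strictly in ordinal order (ss-0, ss-1, ss-2), deleted in reverse, and the controller halts in either direction while any pod is unready. The ordering is observable directly; the namespace and label selector below are the ones this run used:

  # Watch pods come up one at a time, each waiting for its predecessor
  kubectl -n statefulset-9781 scale statefulset ss --replicas=3
  kubectl -n statefulset-9781 get pods -l baz=blah,foo=bar -w

  # Scale-down removes ss-2 first, then ss-1, then ss-0
  kubectl -n statefulset-9781 scale statefulset ss --replicas=0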
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:39:55.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1023 01:39:55.026790      37 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:45:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7222" for this suite.

• [SLOW TEST:306.065 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":41,"skipped":779,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Oct 23 01:45:01.067: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":34,"skipped":657,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Oct 23 01:39:59.912: INFO: Running AfterSuite actions on all nodes
Oct 23 01:45:01.147: INFO: Running AfterSuite actions on node 1
Oct 23 01:45:01.147: INFO: Skipping dumping logs from cluster

Summarizing 7 Failures:

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

Ran 320 of 5770 Specs in 1001.721 seconds
FAIL! -- 313 Passed | 7 Failed | 0 Pending | 5450 Skipped

Ginkgo ran 1 suite in 16m43.343106964s
Test Suite Failed
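All seven failures are [sig-network] Services specs around NodePort and session affinity, which suggests a node-level traffic issue (kube-proxy/CNI) rather than a problem in the apps under test. A single failed spec can be re-run in isolation with Ginkgo's focus flag; the binary name and provider below are assumptions about how this suite was invoked, not taken from the log:

  # Re-run one failed conformance spec against the same cluster
  ./e2e.test --ginkgo.focus='should be able to create a functioning NodePort service' \
    --kubeconfig=/root/.kube/config --provider=skeleton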