Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1653084115 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 20 22:01:57.438: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.440: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 22:01:57.469: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 22:01:57.537: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 20 22:01:57.537: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 20 22:01:57.537: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 22:01:57.537: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 20 22:01:57.537: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 22:01:57.556: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 20 22:01:57.556: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 20 22:01:57.556: INFO: e2e test version: v1.21.9
May 20 22:01:57.557: INFO: kube-apiserver version: v1.21.1
May 20 22:01:57.558: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.565: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 20 22:01:57.564: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.585: INFO: Cluster IP family: ipv4
S
------------------------------
May 20 22:01:57.569: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.589: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
May 20 22:01:57.586: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.606: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
May 20 22:01:57.594: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.617: INFO: Cluster IP family: ipv4
S
------------------------------
May 20 22:01:57.598: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.619: INFO: Cluster IP family: ipv4
May 20 22:01:57.598: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.621: INFO: Cluster IP family: ipv4
SSSS
------------------------------
May 20 22:01:57.604: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.624: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 20 22:01:57.607: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.629: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 20 22:01:57.609: INFO: >>> kubeConfig: /root/.kube/config
May 20 22:01:57.631: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
W0520 22:01:57.679205 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.679: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.681: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:01:57.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6614" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0520 22:01:57.658108 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.659: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.663: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:01:57.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4863" for this suite.
•SS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
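For context on the immutable-Secret behavior exercised above: once `immutable: true` is set, the API server rejects any update to the Secret's data, and only delete-and-recreate can change it. A minimal sketch of such a Secret (name and payload are illustrative, not taken from the test):

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-immutable-secret   # hypothetical name
  type: Opaque
  data:
    key: dmFsdWU=                    # base64("value")
  immutable: true                    # writes to data/stringData are now rejected

------------------------------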
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W0520 22:01:57.703277 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.703: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.705: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 20 22:01:57.718: INFO: Waiting up to 5m0s for pod "pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9" in namespace "emptydir-3705" to be "Succeeded or Failed"
May 20 22:01:57.721: INFO: Pod "pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32161ms
May 20 22:01:59.723: INFO: Pod "pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005154335s
May 20 22:02:01.728: INFO: Pod "pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010053727s
STEP: Saw pod success
May 20 22:02:01.728: INFO: Pod "pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9" satisfied condition "Succeeded or Failed"
May 20 22:02:01.730: INFO: Trying to get logs from node node1 pod pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9 container test-container:
STEP: delete the pod
May 20 22:02:01.749: INFO: Waiting for pod pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9 to disappear
May 20 22:02:01.750: INFO: Pod pod-9a7dd3f5-eee2-45ea-9c1d-f80d4b2d76c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:01.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3705" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
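The emptyDir cases above mount a tmpfs-backed volume; the "0666"/"0777" in the test names is the mode of the file the test writes into the volume, not a volume field. A minimal pod sketch of the same idea (name, image, and command are illustrative, not the suite's own):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo        # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                 # illustrative; the suite uses its own test images
      command: ["sh", "-c", "ls -l /mnt && mount | grep /mnt"]
      securityContext:
        runAsUser: 1001              # the "non-root" part of the test case
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory               # tmpfs-backed emptyDir

------------------------------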
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
W0520 22:01:57.613118 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.613: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.617: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:01:57.636: INFO: The status of Pod busybox-host-aliases702c61e6-d2db-4458-99a2-89e26ad315f9 is Pending, waiting for it to be Running (with Ready = true)
May 20 22:01:59.639: INFO: The status of Pod busybox-host-aliases702c61e6-d2db-4458-99a2-89e26ad315f9 is Pending, waiting for it to be Running (with Ready = true)
May 20 22:02:01.640: INFO: The status of Pod busybox-host-aliases702c61e6-d2db-4458-99a2-89e26ad315f9 is Pending, waiting for it to be Running (with Ready = true)
May 20 22:02:03.640: INFO: The status of Pod busybox-host-aliases702c61e6-d2db-4458-99a2-89e26ad315f9 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:03.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1387" for this suite.
• [SLOW TEST:6.085 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
SS
------------------------------
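The kubelet test above relies on pod-level hostAliases, which the kubelet appends to the container's /etc/hosts. A minimal sketch (name, IP, and hostname are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo           # hypothetical name
  spec:
    hostAliases:
    - ip: "127.0.0.1"
      hostnames:
      - "foo.local"                  # written into /etc/hosts by the kubelet
    containers:
    - name: busybox
      image: busybox
      command: ["cat", "/etc/hosts"]

------------------------------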
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W0520 22:01:57.674967 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.675: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.677: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 20 22:01:57.690: INFO: Waiting up to 5m0s for pod "pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32" in namespace "emptydir-7004" to be "Succeeded or Failed"
May 20 22:01:57.693: INFO: Pod "pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.536037ms
May 20 22:01:59.699: INFO: Pod "pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008443372s
May 20 22:02:01.702: INFO: Pod "pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011582256s
May 20 22:02:03.704: INFO: Pod "pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013864427s
STEP: Saw pod success
May 20 22:02:03.704: INFO: Pod "pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32" satisfied condition "Succeeded or Failed"
May 20 22:02:03.706: INFO: Trying to get logs from node node1 pod pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32 container test-container:
STEP: delete the pod
May 20 22:02:03.728: INFO: Waiting for pod pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32 to disappear
May 20 22:02:03.730: INFO: Pod pod-15b3251a-f2b3-44f5-9f6d-76f4182beb32 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:03.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7004" for this suite.
• [SLOW TEST:6.097 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W0520 22:01:57.689255 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.689: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.691: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
May 20 22:01:57.705: INFO: Waiting up to 5m0s for pod "pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc" in namespace "emptydir-726" to be "Succeeded or Failed"
May 20 22:01:57.707: INFO: Pod "pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc": Phase="Pending", Reason="", readiness=false. Elapsed: 1.933207ms
May 20 22:01:59.710: INFO: Pod "pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005551351s
May 20 22:02:01.713: INFO: Pod "pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008678685s
May 20 22:02:03.717: INFO: Pod "pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012028934s
STEP: Saw pod success
May 20 22:02:03.717: INFO: Pod "pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc" satisfied condition "Succeeded or Failed"
May 20 22:02:03.719: INFO: Trying to get logs from node node2 pod pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc container test-container:
STEP: delete the pod
May 20 22:02:03.730: INFO: Waiting for pod pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc to disappear
May 20 22:02:03.732: INFO: Pod pod-5eb170eb-f3ed-44fb-aa84-9eb41f8483dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:03.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-726" for this suite.
• [SLOW TEST:6.083 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:03.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting the proxy server
May 20 22:02:03.696: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6261 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:03.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6261" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:04.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7168" for this suite.
• [SLOW TEST:6.334 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
SSSSSSSSSSSSS
------------------------------
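For reference, the PodDisruptionBudget object created, updated, and patched above has this shape in the policy/v1 form that went GA in v1.21 (name, selector, and threshold are illustrative):

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: demo-pdb                   # hypothetical name
  spec:
    minAvailable: 1                  # or maxUnavailable; voluntary evictions must keep this many pods up
    selector:
      matchLabels:
        app: demo

------------------------------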
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
W0520 22:01:57.684020 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.684: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.685: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting the auto-created API token
STEP: reading a file in the container
May 20 22:02:08.214: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8378 pod-service-account-c2236932-5828-4630-bc56-082191dee1a4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 20 22:02:08.472: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8378 pod-service-account-c2236932-5828-4630-bc56-082191dee1a4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 20 22:02:08.723: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8378 pod-service-account-c2236932-5828-4630-bc56-082191dee1a4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:08.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8378" for this suite.
• [SLOW TEST:11.298 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
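As the exec commands above show, automounted service-account credentials always appear at a fixed path inside the container: the token, the cluster CA bundle, and the pod's namespace. The same three files can be inspected in any pod with automount enabled (pod name is a placeholder):

  kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
  kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace

------------------------------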
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:01:57.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
W0520 22:01:57.642882 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 22:01:57.643: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 22:01:57.645: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:01:57.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 20 22:02:06.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-457 --namespace=crd-publish-openapi-457 create -f -'
May 20 22:02:06.698: INFO: stderr: ""
May 20 22:02:06.698: INFO: stdout: "e2e-test-crd-publish-openapi-200-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 20 22:02:06.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-457 --namespace=crd-publish-openapi-457 delete e2e-test-crd-publish-openapi-200-crds test-cr'
May 20 22:02:06.854: INFO: stderr: ""
May 20 22:02:06.854: INFO: stdout: "e2e-test-crd-publish-openapi-200-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 20 22:02:06.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-457 --namespace=crd-publish-openapi-457 apply -f -'
May 20 22:02:07.184: INFO: stderr: ""
May 20 22:02:07.184: INFO: stdout: "e2e-test-crd-publish-openapi-200-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 20 22:02:07.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-457 --namespace=crd-publish-openapi-457 delete e2e-test-crd-publish-openapi-200-crds test-cr'
May 20 22:02:07.347: INFO: stderr: ""
May 20 22:02:07.347: INFO: stdout: "e2e-test-crd-publish-openapi-200-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 20 22:02:07.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-457 explain e2e-test-crd-publish-openapi-200-crds'
May 20 22:02:07.672: INFO: stderr: ""
May 20 22:02:07.672: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-200-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:11.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-457" for this suite.
• [SLOW TEST:13.802 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
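The "preserving unknown fields at the schema root" case corresponds to a CRD whose root schema sets x-kubernetes-preserve-unknown-fields, which disables pruning of arbitrary fields; that is also why kubectl explain shows an empty DESCRIPTION above. A minimal sketch (group and names are illustrative, not the generated test names):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com        # hypothetical name
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields at the root

------------------------------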
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:09.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:02:09.080: INFO: Waiting up to 5m0s for pod "busybox-user-65534-17713dbd-aa26-4af0-8570-225281c230a0" in namespace "security-context-test-7020" to be "Succeeded or Failed"
May 20 22:02:09.081: INFO: Pod "busybox-user-65534-17713dbd-aa26-4af0-8570-225281c230a0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.803542ms
May 20 22:02:11.085: INFO: Pod "busybox-user-65534-17713dbd-aa26-4af0-8570-225281c230a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005512895s
May 20 22:02:13.089: INFO: Pod "busybox-user-65534-17713dbd-aa26-4af0-8570-225281c230a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009594474s
May 20 22:02:13.089: INFO: Pod "busybox-user-65534-17713dbd-aa26-4af0-8570-225281c230a0" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:13.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7020" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":58,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
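The runAsUser case above pins the container's UID via the pod security context. A minimal sketch of the same check (name and image illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: runasuser-demo             # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["id", "-u"]          # should print 65534
      securityContext:
        runAsUser: 65534             # the conventional "nobody" UID checked by the test

------------------------------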
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:03.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:02:03.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 create -f -'
May 20 22:02:04.110: INFO: stderr: ""
May 20 22:02:04.110: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
May 20 22:02:04.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 create -f -'
May 20 22:02:04.480: INFO: stderr: ""
May 20 22:02:04.480: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
May 20 22:02:05.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:05.484: INFO: Found 0 / 1
May 20 22:02:06.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:06.484: INFO: Found 0 / 1
May 20 22:02:07.483: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:07.483: INFO: Found 0 / 1
May 20 22:02:08.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:08.484: INFO: Found 0 / 1
May 20 22:02:09.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:09.485: INFO: Found 0 / 1
May 20 22:02:10.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:10.484: INFO: Found 0 / 1
May 20 22:02:11.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:11.484: INFO: Found 0 / 1
May 20 22:02:12.484: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:12.484: INFO: Found 1 / 1
May 20 22:02:12.484: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 20 22:02:12.486: INFO: Selector matched 1 pods for map[app:agnhost]
May 20 22:02:12.486: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 20 22:02:12.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 describe pod agnhost-primary-fwzf8'
May 20 22:02:12.660: INFO: stderr: ""
May 20 22:02:12.660: INFO: stdout: "Name: agnhost-primary-fwzf8\nNamespace: kubectl-5684\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 20 May 2022 22:02:04 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.194\"\n ],\n \"mac\": \"1e:00:7f:4f:55:da\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.194\"\n ],\n \"mac\": \"1e:00:7f:4f:55:da\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.194\nIPs:\n IP: 10.244.3.194\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://66ab515e39c116f37414f31ca95b8b679e603c3bfd7838ce407dcae0cba138e0\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 20 May 2022 22:02:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dqtj6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-dqtj6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-5684/agnhost-primary-fwzf8 to node2\n Normal Pulling 4s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 1s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 2.868416786s\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n"
May 20 22:02:12.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 describe rc agnhost-primary'
May 20 22:02:12.842: INFO: stderr: ""
May 20 22:02:12.842: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5684\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-primary-fwzf8\n"
May 20 22:02:12.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 describe service agnhost-primary'
May 20 22:02:13.023: INFO: stderr: ""
May 20 22:02:13.023: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5684\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.55.253\nIPs: 10.233.55.253\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.194:6379\nSession Affinity: None\nEvents: \n"
May 20 22:02:13.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 describe node master1'
May 20 22:02:13.241: INFO: stderr: ""
May 20 22:02:13.241: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n nfd.node.kubernetes.io/master.version: v0.8.2\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 20 May 2022 20:01:28 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 20 May 2022 22:02:11 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 20 May 2022 20:07:07 +0000 Fri, 20 May 2022 20:07:07 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 20 May 2022 22:02:03 +0000 Fri, 20 May 2022 20:01:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 20 May 2022 22:02:03 +0000 Fri, 20 May 2022 20:01:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 20 May 2022 22:02:03 +0000 Fri, 20 May 2022 20:01:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 20 May 2022 22:02:03 +0000 Fri, 20 May 2022 20:04:22 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518304Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629472Ki\n pods: 110\nSystem Info:\n Machine ID: e9847a94929d4465bdf672fd6e82b77d\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: a01e5bd5-a73c-4ab6-b80a-cab509b05bc6\n Kernel Version: 3.10.0-1160.66.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.16\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-n94w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 113m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n kube-system kube-flannel-tzq8g 150m (0%) 300m (0%) 64M (0%) 500M (0%) 118m\n kube-system kube-multus-ds-amd64-k8cb6 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 117m\n kube-system kube-proxy-rgxh2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 118m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 101m\n kube-system node-feature-discovery-controller-cff799f9f-nq7tc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n monitoring node-exporter-4rvrg 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 104m\n monitoring prometheus-operator-585ccfb458-bl62n 100m (0%) 200m (0%) 100Mi (0%) 200Mi (0%) 105m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 870m (1%)\n memory 472944640 (0%) 1034773760 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
May 20 22:02:13.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5684 describe namespace kubectl-5684'
May 20 22:02:13.445: INFO: stderr: ""
May 20 22:02:13.445: INFO: stdout: "Name: kubectl-5684\nLabels: e2e-framework=kubectl\n e2e-run=a6f91433-ae82-4ffb-bda6-53b8ed556865\n kubernetes.io/metadata.name=kubectl-5684\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:13.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5684" for this suite.
• [SLOW TEST:9.703 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:04.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-098a332a-5dde-4149-a127-d61fde48eb78
STEP: Creating a pod to test consume secrets
May 20 22:02:04.135: INFO: Waiting up to 5m0s for pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff" in namespace "secrets-759" to be "Succeeded or Failed"
May 20 22:02:04.140: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383848ms
May 20 22:02:06.145: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009078539s
May 20 22:02:08.149: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013072516s
May 20 22:02:10.153: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017083916s
May 20 22:02:12.158: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022637793s
May 20 22:02:14.162: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026525235s
STEP: Saw pod success
May 20 22:02:14.162: INFO: Pod "pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff" satisfied condition "Succeeded or Failed"
May 20 22:02:14.165: INFO: Trying to get logs from node node2 pod pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff container secret-volume-test:
STEP: delete the pod
May 20 22:02:14.176: INFO: Waiting for pod pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff to disappear
May 20 22:02:14.177: INFO: Pod pod-secrets-66e7d93b-6200-4fea-8ee9-987aff647bff no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:14.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-759" for this suite.
• [SLOW TEST:10.100 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:11.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-5ddfaa68-2194-4fe3-abaa-d60264bd175e
STEP: Creating a pod to test consume secrets
May 20 22:02:11.500: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1" in namespace "projected-2974" to be "Succeeded or Failed"
May 20 22:02:11.503: INFO: Pod "pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.494846ms
May 20 22:02:13.507: INFO: Pod "pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007545747s
May 20 22:02:15.511: INFO: Pod "pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011656901s
STEP: Saw pod success
May 20 22:02:15.511: INFO: Pod "pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1" satisfied condition "Succeeded or Failed"
May 20 22:02:15.513: INFO: Trying to get logs from node node1 pod pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1 container projected-secret-volume-test:
STEP: delete the pod
May 20 22:02:15.529: INFO: Waiting for pod pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1 to disappear
May 20 22:02:15.531: INFO: Pod pod-projected-secrets-d517558f-fe06-48db-a88d-1d6c188b1df1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:15.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2974" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:15.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create deployment with httpd image
May 20 22:02:15.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9786 create -f -'
May 20 22:02:15.942: INFO: stderr: ""
May 20 22:02:15.942: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
May 20 22:02:15.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9786 diff -f -'
May 20 22:02:16.254: INFO: rc: 1
May 20 22:02:16.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9786 delete -f -'
May 20 22:02:16.387: INFO: stderr: ""
May 20 22:02:16.387: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9786" for this suite.
•
------------------------------
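The "rc: 1" in the kubectl diff test above is the expected outcome, not a failure: kubectl diff exits 0 when live and declared state match, 1 when they differ, and greater than 1 on error, so an exit code of 1 is how the test detects the image drift it introduced. Illustrative usage (file name is a placeholder):

  kubectl diff -f deployment.yaml    # exit 0: no drift; exit 1: live state differs
  echo $?

------------------------------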
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:13.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
May 20 22:02:13.232: INFO: Waiting up to 5m0s for pod "downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5" in namespace "downward-api-8260" to be "Succeeded or Failed"
May 20 22:02:13.235: INFO: Pod "downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270265ms
May 20 22:02:15.239: INFO: Pod "downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006554526s
May 20 22:02:17.243: INFO: Pod "downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010553981s
STEP: Saw pod success
May 20 22:02:17.243: INFO: Pod "downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5" satisfied condition "Succeeded or Failed"
May 20 22:02:17.246: INFO: Trying to get logs from node node1 pod downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5 container dapi-container:
STEP: delete the pod
May 20 22:02:17.260: INFO: Waiting for pod downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5 to disappear
May 20 22:02:17.262: INFO: Pod downward-api-7fc820b0-d0ae-40cc-9016-e789baccfbf5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:17.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8260" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":106,"failed":0}
SSSSSSSS
------------------------------
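The downward API case above injects the node's IP into the container environment via a fieldRef. A minimal sketch (name, image, and variable name illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-hostip-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP # the host IP the test asserts on

------------------------------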
May 20 22:02:14.934: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:14.935: INFO: Found 0 / 1 May 20 22:02:15.936: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:15.936: INFO: Found 0 / 1 May 20 22:02:16.935: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:16.935: INFO: Found 0 / 1 May 20 22:02:17.937: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:17.937: INFO: Found 0 / 1 May 20 22:02:18.935: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:18.935: INFO: Found 1 / 1 May 20 22:02:18.935: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 20 22:02:18.938: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:18.938: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 22:02:18.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4573 patch pod agnhost-primary-8k826 -p {"metadata":{"annotations":{"x":"y"}}}' May 20 22:02:19.105: INFO: stderr: "" May 20 22:02:19.105: INFO: stdout: "pod/agnhost-primary-8k826 patched\n" STEP: checking annotations May 20 22:02:19.109: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:02:19.109: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:19.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4573" for this suite. • [SLOW TEST:5.594 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":3,"skipped":48,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:01:57.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi W0520 22:01:57.625127 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 22:01:57.625: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 22:01:57.627: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 20 22:01:57.630: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:21.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9925" for this suite.
• [SLOW TEST:23.940 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:21.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 20 22:02:21.600: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 20 22:02:21.604: INFO: starting watch
STEP: patching
STEP: updating
May 20 22:02:21.614: INFO: waiting for watch events with expected annotations
May 20 22:02:21.614: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:21.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-589" for this suite.
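------------------------------
For readers retracing the EndpointSlice API steps above by hand: the suite drives these verbs through the Go client, but the same sequence can be exercised with kubectl. A minimal sketch; the object name, namespace, and addresses below are illustrative, not taken from this run:

# create a standalone EndpointSlice against the discovery.k8s.io/v1 API checked above
kubectl apply -f - <<'EOF'
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: demo-slice                   # hypothetical name
  namespace: default
  labels:
    kubernetes.io/service-name: demo # associates the slice with a Service
addressType: IPv4
ports:
- name: http
  port: 80
  protocol: TCP
endpoints:
- addresses:
  - 10.0.0.10
EOF

# get / list / patch / delete, mirroring the STEP lines in the log
kubectl get endpointslice demo-slice -n default
kubectl get endpointslices --all-namespaces
kubectl patch endpointslice demo-slice -n default --type merge -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl delete endpointslices -n default -l kubernetes.io/service-name=demo   # delete-collection equivalent
------------------------------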
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:21.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:21.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8018" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:14.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:25.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8395" for this suite. • [SLOW TEST:11.103 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":4,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:21.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs May 20 22:02:21.871: INFO: Waiting up to 5m0s for pod "pod-d2dd35cd-7775-47ec-b801-714c71aec3ca" in namespace "emptydir-1982" to be "Succeeded or Failed" May 20 22:02:21.873: INFO: Pod "pod-d2dd35cd-7775-47ec-b801-714c71aec3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112716ms May 20 22:02:23.877: INFO: Pod "pod-d2dd35cd-7775-47ec-b801-714c71aec3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005992904s May 20 22:02:25.880: INFO: Pod "pod-d2dd35cd-7775-47ec-b801-714c71aec3ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00922213s STEP: Saw pod success May 20 22:02:25.880: INFO: Pod "pod-d2dd35cd-7775-47ec-b801-714c71aec3ca" satisfied condition "Succeeded or Failed" May 20 22:02:25.882: INFO: Trying to get logs from node node2 pod pod-d2dd35cd-7775-47ec-b801-714c71aec3ca container test-container: STEP: delete the pod May 20 22:02:25.893: INFO: Waiting for pod pod-d2dd35cd-7775-47ec-b801-714c71aec3ca to disappear May 20 22:02:25.895: INFO: Pod pod-d2dd35cd-7775-47ec-b801-714c71aec3ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:25.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1982" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":73,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:01.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6984 STEP: creating service affinity-clusterip-transition in namespace services-6984 STEP: creating replication controller affinity-clusterip-transition in namespace services-6984 I0520 22:02:01.827905 26 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6984, replica count: 3 I0520 22:02:04.879580 26 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:02:07.880552 26 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:02:10.881048 26 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:02:10.887: INFO: Creating new exec pod May 20 22:02:15.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6984 exec execpod-affinity5jlq7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' May 20 22:02:16.185: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" May 20 22:02:16.185: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:02:16.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6984 exec execpod-affinity5jlq7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.57.46 80' May 20 22:02:16.433: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.57.46 80\nConnection to 10.233.57.46 80 port [tcp/http] succeeded!\n" May 20 22:02:16.434: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:02:16.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6984 exec execpod-affinity5jlq7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.57.46:80/ ; done' May 20 22:02:16.764: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n" May 20 22:02:16.764: INFO: stdout: "\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-ts5s7\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-wv2mc\naffinity-clusterip-transition-ts5s7\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-ts5s7\naffinity-clusterip-transition-ts5s7\naffinity-clusterip-transition-ts5s7\naffinity-clusterip-transition-wv2mc" May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-ts5s7 May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-ts5s7 May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-ts5s7 May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-ts5s7 May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-ts5s7 May 20 22:02:16.764: INFO: Received response from host: affinity-clusterip-transition-wv2mc May 20 22:02:16.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6984 exec execpod-affinity5jlq7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.57.46:80/ ; done' May 20 22:02:17.250: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.57.46:80/\n" May 20 22:02:17.250: INFO: stdout: "\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz\naffinity-clusterip-transition-j2wlz" May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Received response from host: affinity-clusterip-transition-j2wlz May 20 22:02:17.250: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6984, will wait for the garbage collector to delete the pods May 20 22:02:17.314: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.190144ms May 20 22:02:17.415: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.241059ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:25.926: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "services-6984" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:24.138 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:16.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:27.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-378" for this suite. • [SLOW TEST:11.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":4,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:01:57.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl W0520 22:01:58.055835 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 22:01:58.056: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 22:01:58.058: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 20 22:01:58.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 create -f -' May 20 22:01:58.504: INFO: stderr: "" May 20 22:01:58.504: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 22:01:58.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:01:58.674: INFO: stderr: "" May 20 22:01:58.674: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-nltlh " May 20 22:01:58.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:01:58.861: INFO: stderr: "" May 20 22:01:58.861: INFO: stdout: "" May 20 22:01:58.861: INFO: update-demo-nautilus-4v9hj is created but not running May 20 22:02:03.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:04.046: INFO: stderr: "" May 20 22:02:04.046: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-nltlh " May 20 22:02:04.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:04.203: INFO: stderr: "" May 20 22:02:04.203: INFO: stdout: "" May 20 22:02:04.203: INFO: update-demo-nautilus-4v9hj is created but not running May 20 22:02:09.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:09.356: INFO: stderr: "" May 20 22:02:09.356: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-nltlh " May 20 22:02:09.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:09.514: INFO: stderr: "" May 20 22:02:09.514: INFO: stdout: "true" May 20 22:02:09.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:09.661: INFO: stderr: "" May 20 22:02:09.661: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:09.661: INFO: validating pod update-demo-nautilus-4v9hj May 20 22:02:09.665: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:09.665: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:09.665: INFO: update-demo-nautilus-4v9hj is verified up and running May 20 22:02:09.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-nltlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:09.830: INFO: stderr: "" May 20 22:02:09.830: INFO: stdout: "true" May 20 22:02:09.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-nltlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:09.995: INFO: stderr: "" May 20 22:02:09.995: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:09.996: INFO: validating pod update-demo-nautilus-nltlh May 20 22:02:09.999: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:09.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:09.999: INFO: update-demo-nautilus-nltlh is verified up and running STEP: scaling down the replication controller May 20 22:02:10.008: INFO: scanned /root for discovery docs: May 20 22:02:10.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 scale rc update-demo-nautilus --replicas=1 --timeout=5m' May 20 22:02:10.232: INFO: stderr: "" May 20 22:02:10.232: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 22:02:10.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:10.407: INFO: stderr: "" May 20 22:02:10.407: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-nltlh " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 22:02:15.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:15.577: INFO: stderr: "" May 20 22:02:15.577: INFO: stdout: "update-demo-nautilus-4v9hj " May 20 22:02:15.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:15.718: INFO: stderr: "" May 20 22:02:15.718: INFO: stdout: "true" May 20 22:02:15.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:15.881: INFO: stderr: "" May 20 22:02:15.881: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:15.881: INFO: validating pod update-demo-nautilus-4v9hj May 20 22:02:15.883: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:15.883: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:15.883: INFO: update-demo-nautilus-4v9hj is verified up and running STEP: scaling up the replication controller May 20 22:02:15.893: INFO: scanned /root for discovery docs: May 20 22:02:15.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 scale rc update-demo-nautilus --replicas=2 --timeout=5m' May 20 22:02:16.106: INFO: stderr: "" May 20 22:02:16.106: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 22:02:16.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:16.284: INFO: stderr: "" May 20 22:02:16.284: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-m8lh5 " May 20 22:02:16.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:16.452: INFO: stderr: "" May 20 22:02:16.452: INFO: stdout: "true" May 20 22:02:16.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:16.606: INFO: stderr: "" May 20 22:02:16.606: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:16.606: INFO: validating pod update-demo-nautilus-4v9hj May 20 22:02:16.609: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:16.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:16.610: INFO: update-demo-nautilus-4v9hj is verified up and running May 20 22:02:16.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-m8lh5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:16.782: INFO: stderr: "" May 20 22:02:16.782: INFO: stdout: "" May 20 22:02:16.782: INFO: update-demo-nautilus-m8lh5 is created but not running May 20 22:02:21.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:21.976: INFO: stderr: "" May 20 22:02:21.976: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-m8lh5 " May 20 22:02:21.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:22.154: INFO: stderr: "" May 20 22:02:22.154: INFO: stdout: "true" May 20 22:02:22.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:22.329: INFO: stderr: "" May 20 22:02:22.329: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:22.329: INFO: validating pod update-demo-nautilus-4v9hj May 20 22:02:22.333: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:22.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:22.333: INFO: update-demo-nautilus-4v9hj is verified up and running May 20 22:02:22.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-m8lh5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:22.500: INFO: stderr: "" May 20 22:02:22.500: INFO: stdout: "" May 20 22:02:22.500: INFO: update-demo-nautilus-m8lh5 is created but not running May 20 22:02:27.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:02:27.672: INFO: stderr: "" May 20 22:02:27.672: INFO: stdout: "update-demo-nautilus-4v9hj update-demo-nautilus-m8lh5 " May 20 22:02:27.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:27.855: INFO: stderr: "" May 20 22:02:27.855: INFO: stdout: "true" May 20 22:02:27.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-4v9hj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:28.028: INFO: stderr: "" May 20 22:02:28.028: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:28.028: INFO: validating pod update-demo-nautilus-4v9hj May 20 22:02:28.031: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:28.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:28.031: INFO: update-demo-nautilus-4v9hj is verified up and running May 20 22:02:28.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-m8lh5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:02:28.203: INFO: stderr: "" May 20 22:02:28.203: INFO: stdout: "true" May 20 22:02:28.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods update-demo-nautilus-m8lh5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:02:28.378: INFO: stderr: "" May 20 22:02:28.378: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:02:28.378: INFO: validating pod update-demo-nautilus-m8lh5 May 20 22:02:28.381: INFO: got data: { "image": "nautilus.jpg" } May 20 22:02:28.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:02:28.381: INFO: update-demo-nautilus-m8lh5 is verified up and running STEP: using delete to clean up resources May 20 22:02:28.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 delete --grace-period=0 --force -f -' May 20 22:02:28.519: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 20 22:02:28.519: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 22:02:28.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get rc,svc -l name=update-demo --no-headers' May 20 22:02:28.724: INFO: stderr: "No resources found in kubectl-2442 namespace.\n" May 20 22:02:28.724: INFO: stdout: "" May 20 22:02:28.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2442 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 22:02:28.901: INFO: stderr: "" May 20 22:02:28.901: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:28.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2442" for this suite. • [SLOW TEST:31.186 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:25.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 20 22:02:25.424: INFO: Waiting up to 5m0s for pod "pod-440716b8-f6ec-446b-b635-47ac97e709d2" in namespace "emptydir-3381" to be "Succeeded or Failed" May 20 22:02:25.426: INFO: Pod "pod-440716b8-f6ec-446b-b635-47ac97e709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189301ms May 20 22:02:27.430: INFO: Pod "pod-440716b8-f6ec-446b-b635-47ac97e709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006074451s May 20 22:02:29.435: INFO: Pod "pod-440716b8-f6ec-446b-b635-47ac97e709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011229991s May 20 22:02:31.441: INFO: Pod "pod-440716b8-f6ec-446b-b635-47ac97e709d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016411359s STEP: Saw pod success May 20 22:02:31.441: INFO: Pod "pod-440716b8-f6ec-446b-b635-47ac97e709d2" satisfied condition "Succeeded or Failed" May 20 22:02:31.443: INFO: Trying to get logs from node node1 pod pod-440716b8-f6ec-446b-b635-47ac97e709d2 container test-container: STEP: delete the pod May 20 22:02:31.458: INFO: Waiting for pod pod-440716b8-f6ec-446b-b635-47ac97e709d2 to disappear May 20 22:02:31.460: INFO: Pod pod-440716b8-f6ec-446b-b635-47ac97e709d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:31.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3381" for this suite. • [SLOW TEST:6.079 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:28.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-c3c455b2-e1c5-4de5-a6c4-329fe5af9527 STEP: Creating a pod to test consume configMaps May 20 22:02:28.961: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999" in namespace "projected-7236" to be "Succeeded or Failed" May 20 22:02:28.964: INFO: Pod "pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.893847ms May 20 22:02:30.967: INFO: Pod "pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006213067s May 20 22:02:32.971: INFO: Pod "pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010107582s May 20 22:02:34.978: INFO: Pod "pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016629058s STEP: Saw pod success May 20 22:02:34.978: INFO: Pod "pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999" satisfied condition "Succeeded or Failed" May 20 22:02:34.981: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999 container projected-configmap-volume-test: STEP: delete the pod May 20 22:02:34.993: INFO: Waiting for pod pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999 to disappear May 20 22:02:34.995: INFO: Pod pod-projected-configmaps-09590837-8a36-46a9-859c-d0766cbfd999 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:34.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7236" for this suite. • [SLOW TEST:6.081 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:27.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args May 20 22:02:27.544: INFO: Waiting up to 5m0s for pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47" in namespace "var-expansion-2140" to be "Succeeded or Failed" May 20 22:02:27.547: INFO: Pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.605994ms May 20 22:02:29.551: INFO: Pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006735814s May 20 22:02:31.555: INFO: Pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010697941s May 20 22:02:33.560: INFO: Pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015740555s May 20 22:02:35.567: INFO: Pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.022241433s STEP: Saw pod success May 20 22:02:35.567: INFO: Pod "var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47" satisfied condition "Succeeded or Failed" May 20 22:02:35.569: INFO: Trying to get logs from node node1 pod var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47 container dapi-container: STEP: delete the pod May 20 22:02:35.589: INFO: Waiting for pod var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47 to disappear May 20 22:02:35.591: INFO: Pod var-expansion-8b799970-2eeb-440c-9676-501cf95dfd47 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:35.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2140" for this suite. • [SLOW TEST:8.096 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:31.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-fca80efa-67d9-44f4-ba5d-5ed71b85bc94 STEP: Creating a pod to test consume configMaps May 20 22:02:31.570: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d" in namespace "configmap-1681" to be "Succeeded or Failed" May 20 22:02:31.573: INFO: Pod "pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07122ms May 20 22:02:33.577: INFO: Pod "pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006652248s May 20 22:02:35.581: INFO: Pod "pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01092334s
STEP: Saw pod success
May 20 22:02:35.581: INFO: Pod "pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d" satisfied condition "Succeeded or Failed"
May 20 22:02:35.584: INFO: Trying to get logs from node node2 pod pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d container agnhost-container:
STEP: delete the pod
May 20 22:02:35.595: INFO: Waiting for pod pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d to disappear
May 20 22:02:35.597: INFO: Pod pod-configmaps-1ac6db1e-48a5-4bff-8051-cdde45e7b67d no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:35.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1681" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":106,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:02:35.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149
[It] should support creating IngressClass API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 20 22:02:35.684: INFO: starting watch
STEP: patching
STEP: updating
May 20 22:02:35.692: INFO: waiting for watch events with expected annotations
May 20 22:02:35.692: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:02:35.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-4186" for this suite.
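------------------------------
As with the EndpointSlice sketch earlier, the IngressClass verbs above have straightforward kubectl equivalents. IngressClass is cluster-scoped, so no namespace is involved; the name and controller string below are illustrative, not taken from this run:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: demo-class                   # hypothetical name
spec:
  controller: example.com/ingress-controller
EOF

kubectl get ingressclasses
kubectl patch ingressclass demo-class --type merge -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl delete ingressclass demo-class
------------------------------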
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":7,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:03.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-2681 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 22:02:03.849: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 22:02:03.880: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:02:05.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:02:07.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:02:09.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:11.883: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:13.883: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:15.883: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:17.883: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:19.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:21.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:02:23.885: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 22:02:23.890: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 22:02:25.893: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 22:02:33.925: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 20 22:02:33.925: INFO: Going to poll 10.244.4.177 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 20 22:02:33.928: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.177 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2681 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:02:33.928: INFO: >>> kubeConfig: /root/.kube/config May 20 22:02:35.011: INFO: Found all 1 expected endpoints: [netserver-0] May 20 22:02:35.011: INFO: Going to poll 10.244.3.192 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 20 22:02:35.014: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.192 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2681 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:02:35.014: INFO: >>> kubeConfig: /root/.kube/config May 20 22:02:36.170: INFO: Found all 1 expected endpoints: 
[netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:36.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2681" for this suite. • [SLOW TEST:32.361 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:35.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:02:35.636: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 20 22:02:40.642: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 22:02:42.650: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 20 22:02:42.670: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9060 03448a8a-9c4c-4820-8a5b-1021c68ec917 33062 1 2022-05-20 22:02:42 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-05-20 22:02:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e7a4a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 20 22:02:42.672: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 20 22:02:42.672: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 20 22:02:42.673: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9060 f38fbe23-f811-4d9d-8ba8-f228cd4a9369 33063 1 2022-05-20 22:02:35 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 03448a8a-9c4c-4820-8a5b-1021c68ec917 0xc002e7abb7 0xc002e7abb8}] [] [{e2e.test Update apps/v1 2022-05-20 22:02:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:02:42 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"03448a8a-9c4c-4820-8a5b-1021c68ec917\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e7ad58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 20 22:02:42.675: INFO: Pod "test-cleanup-controller-4r96n" is available: &Pod{ObjectMeta:{test-cleanup-controller-4r96n test-cleanup-controller-
deployment-9060 8b6786bc-d723-4e57-80bf-6c0599b5c36a 33034 0 2022-05-20 22:02:35 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.206" ], "mac": "5e:ae:45:6f:ee:53", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.206" ], "mac": "5e:ae:45:6f:ee:53", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller f38fbe23-f811-4d9d-8ba8-f228cd4a9369 0xc002e7b497 0xc002e7b498}] [] [{kube-controller-manager Update v1 2022-05-20 22:02:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f38fbe23-f811-4d9d-8ba8-f228cd4a9369\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:02:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:02:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.206\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mrbwv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},
VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrbwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:02:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:02:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:02:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:02:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.206,StartTime:2022-05-20 22:02:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:02:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://349968f495a8079ecb085368e51fb27b284d49e4515b514a8d4aba2183fb2c02,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:42.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9060" for this suite. 
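------------------------------
Worth pausing on the Deployment dump above: Spec carries RevisionHistoryLimit:*0, which is the crux of this case. With a history limit of zero, the Deployment controller may delete a superseded ReplicaSet (here test-cleanup-controller) as soon as the rollout makes it obsolete, and the test waits for exactly that cleanup. A minimal client-go sketch of the same setup, assuming only a reachable cluster at the suite's kubeconfig path; the names, labels, and agnhost image are taken from this log:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// Keep no old ReplicaSets: superseded ones are deleted as soon
			// as they become obsolete, which is what this e2e case asserts.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("deployment-9060").Create(
		context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------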
• [SLOW TEST:7.073 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:35.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 20 22:02:35.829: INFO: Waiting up to 5m0s for pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f" in namespace "emptydir-9529" to be "Succeeded or Failed" May 20 22:02:35.831: INFO: Pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.923068ms May 20 22:02:37.834: INFO: Pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004895689s May 20 22:02:39.838: INFO: Pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009413028s May 20 22:02:41.844: INFO: Pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014924995s May 20 22:02:43.848: INFO: Pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019759763s STEP: Saw pod success May 20 22:02:43.849: INFO: Pod "pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f" satisfied condition "Succeeded or Failed" May 20 22:02:43.851: INFO: Trying to get logs from node node2 pod pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f container test-container: STEP: delete the pod May 20 22:02:43.868: INFO: Waiting for pod pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f to disappear May 20 22:02:43.870: INFO: Pod pod-3ef9c803-b51a-4408-b9c4-9be829c5e87f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:43.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9529" for this suite. 
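------------------------------
The Pending/Pending/.../Succeeded lines above are the framework's poll loop: the test pod is created, polled every couple of seconds, and the test proceeds once the phase is terminal ("Succeeded or Failed"). A rough equivalent of that wait in client-go, offered as a sketch rather than the framework's actual helper; it assumes a configured clientset and the pod and namespace names from this run:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodSucceededOrFailed mirrors the `Waiting up to 5m0s for pod ... to be
// "Succeeded or Failed"` messages: poll the pod until it reaches a terminal
// phase, treating Failed as an error and anything else as "keep waiting".
func waitPodSucceededOrFailed(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return true, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending or Running
		}
	})
}
------------------------------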
• [SLOW TEST:8.083 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":158,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:43.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version May 20 22:02:43.929: INFO: Major version: 1 STEP: Confirm minor version May 20 22:02:43.929: INFO: cleanMinorVersion: 21 May 20 22:02:43.929: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:43.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-5315" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":9,"skipped":168,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:43.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 20 22:02:43.989: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8852 efc40615-6225-4e34-b17e-ba86ad3ed3bf 33113 0 2022-05-20 22:02:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-20 22:02:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:02:43.989: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8852 efc40615-6225-4e34-b17e-ba86ad3ed3bf 33114 0 2022-05-20 22:02:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-20 22:02:43 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 20 22:02:44.000: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8852 efc40615-6225-4e34-b17e-ba86ad3ed3bf 33115 0 2022-05-20 22:02:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-20 22:02:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:02:44.000: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8852 efc40615-6225-4e34-b17e-ba86ad3ed3bf 33116 0 2022-05-20 22:02:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-20 22:02:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:44.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8852" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":10,"skipped":178,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:17.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-38a7bfd0-47db-49b3-b265-d01f24bae491 in namespace container-probe-8049 May 20 22:02:21.359: INFO: Started pod liveness-38a7bfd0-47db-49b3-b265-d01f24bae491 in namespace container-probe-8049 STEP: checking the pod's current state and verifying that restartCount is present May 20 22:02:21.362: INFO: Initial restart count of pod liveness-38a7bfd0-47db-49b3-b265-d01f24bae491 is 0 May 20 22:02:47.452: INFO: Restart count of pod container-probe-8049/liveness-38a7bfd0-47db-49b3-b265-d01f24bae491 is now 1 (26.090563502s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 
22:02:47.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8049" for this suite. • [SLOW TEST:30.174 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":114,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:42.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:02:42.711: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:48.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3604" for this suite. 
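------------------------------
The CustomResourceDefinition case above exercises the /status subresource of the CRD object itself: the test gets the definition, updates it through the status endpoint, and patches it the same way. A hedged sketch of that flow with the apiextensions clientset; the CRD name here is hypothetical, since the real test registers a randomly named definition first:

package main

import (
	"context"
	"fmt"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "noxus.mygroup.example.com" // hypothetical CRD name

	// Read the definition; its status comes back as part of the object.
	crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(
		context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions observed:", len(crd.Status.Conditions))

	// Patch through the "status" subresource, as the test does; metadata
	// edits are accepted there even though spec changes are not.
	patch := []byte(`{"metadata":{"labels":{"e2e":"patched"}}}`)
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Patch(
		context.TODO(), name, types.MergePatchType, patch,
		metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}
------------------------------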
• [SLOW TEST:5.566 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":7,"skipped":43,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:36.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-7b5c092e-ece1-49e0-ab6f-e043536020ee STEP: Creating a pod to test consume secrets May 20 22:02:36.229: INFO: Waiting up to 5m0s for pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee" in namespace "secrets-1775" to be "Succeeded or Failed" May 20 22:02:36.232: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649184ms May 20 22:02:38.235: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006092607s May 20 22:02:40.240: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010348159s May 20 22:02:42.246: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016867493s May 20 22:02:44.250: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021137911s May 20 22:02:46.254: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024694872s May 20 22:02:48.257: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.027591697s STEP: Saw pod success May 20 22:02:48.257: INFO: Pod "pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee" satisfied condition "Succeeded or Failed" May 20 22:02:48.259: INFO: Trying to get logs from node node2 pod pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee container secret-volume-test: STEP: delete the pod May 20 22:02:48.270: INFO: Waiting for pod pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee to disappear May 20 22:02:48.272: INFO: Pod pod-secrets-6413dc17-9f20-43ec-99b4-fa81ad6fe7ee no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:48.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1775" for this suite. • [SLOW TEST:12.087 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:25.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2690.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2690.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2690.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2690.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2690.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.25.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.25.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.25.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.25.195_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2690.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2690.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2690.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2690.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2690.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2690.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.25.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.25.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.25.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.25.195_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 22:02:44.006: INFO: Unable to read wheezy_udp@dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.011: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.017: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.063: INFO: Unable to read jessie_udp@dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.065: INFO: Unable to read jessie_tcp@dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.068: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.070: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local from pod dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f: the server could not find the requested resource (get pods dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f) May 20 22:02:44.088: INFO: Lookups using dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f failed for: [wheezy_udp@dns-test-service.dns-2690.svc.cluster.local wheezy_tcp@dns-test-service.dns-2690.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local jessie_udp@dns-test-service.dns-2690.svc.cluster.local jessie_tcp@dns-test-service.dns-2690.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2690.svc.cluster.local] May 20 22:02:49.139: INFO: DNS probes using dns-2690/dns-test-0fd17f2d-f859-4e5d-9934-aa79b5394c1f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:49.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2690" for this suite. 
• [SLOW TEST:23.227 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":44,"failed":0} [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:35.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 May 20 22:02:35.029: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. May 20 22:02:35.332: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 20 22:02:37.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:02:39.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:02:41.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:02:43.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:02:45.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:02:47.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680955, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:02:50.180: INFO: Waited 813.281093ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices May 20 22:02:50.581: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:51.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2665" for this suite. • [SLOW TEST:16.468 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":3,"skipped":44,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:49.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:02:49.258: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 20 22:02:51.282: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:52.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9967" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":66,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:48.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:02:48.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f" in namespace "projected-143" to be "Succeeded or Failed" May 20 22:02:48.333: INFO: Pod "downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345856ms May 20 22:02:50.336: INFO: Pod "downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006799202s May 20 22:02:52.339: INFO: Pod "downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010696764s May 20 22:02:54.344: INFO: Pod "downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01542306s STEP: Saw pod success May 20 22:02:54.344: INFO: Pod "downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f" satisfied condition "Succeeded or Failed" May 20 22:02:54.347: INFO: Trying to get logs from node node2 pod downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f container client-container: STEP: delete the pod May 20 22:02:54.359: INFO: Waiting for pod downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f to disappear May 20 22:02:54.362: INFO: Pod downwardapi-volume-a1a22c6c-46f7-4c1f-bc9f-0982ec81e75f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:54.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-143" for this suite. 
• [SLOW TEST:6.078 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:54.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:54.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-40" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":6,"skipped":82,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:51.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-66e6b20a-14f0-4d2a-8033-ad7bce5895f9 STEP: Creating a pod to test consume configMaps May 20 22:02:51.542: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496" in namespace "configmap-9967" to be "Succeeded or Failed" May 20 22:02:51.548: INFO: Pod "pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496": Phase="Pending", Reason="", readiness=false. Elapsed: 5.807733ms May 20 22:02:53.551: INFO: Pod "pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009084246s May 20 22:02:55.557: INFO: Pod "pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014668853s STEP: Saw pod success May 20 22:02:55.557: INFO: Pod "pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496" satisfied condition "Succeeded or Failed" May 20 22:02:55.559: INFO: Trying to get logs from node node1 pod pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496 container configmap-volume-test: STEP: delete the pod May 20 22:02:55.666: INFO: Waiting for pod pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496 to disappear May 20 22:02:55.668: INFO: Pod pod-configmaps-d6fec1f7-9582-41fc-a800-ebd02fa76496 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:55.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9967" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:52.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:02:52.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f" in namespace "downward-api-8069" to be "Succeeded or Failed" May 20 22:02:52.388: INFO: Pod "downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220242ms May 20 22:02:54.392: INFO: Pod "downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005954737s May 20 22:02:56.396: INFO: Pod "downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009978834s STEP: Saw pod success May 20 22:02:56.396: INFO: Pod "downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f" satisfied condition "Succeeded or Failed" May 20 22:02:56.399: INFO: Trying to get logs from node node2 pod downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f container client-container: STEP: delete the pod May 20 22:02:56.410: INFO: Waiting for pod downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f to disappear May 20 22:02:56.412: INFO: Pod downwardapi-volume-efcc03f1-2372-4a29-9888-0101fa647b9f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:56.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8069" for this suite. 
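------------------------------
Each of these volume cases verifies itself the way the "Trying to get logs from node ... container client-container:" lines hint at: once the pod has succeeded, the test pulls the container's stdout and matches it against the expected content (the pod name, in the downward-api case above). A minimal sketch of that log fetch with client-go, assuming a configured clientset:

package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// containerLogs fetches the complete log of one container in a pod, which is
// how the suite inspects what a downwardAPI or volume test wrote to stdout.
func containerLogs(client kubernetes.Interface, ns, pod, container string) (string, error) {
	raw, err := client.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
	}).DoRaw(context.TODO())
	if err != nil {
		return "", fmt.Errorf("getting logs for %s/%s[%s]: %w", ns, pod, container, err)
	}
	return string(raw), nil
}
------------------------------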
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:01:57.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:02:57.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1765" for this suite. • [SLOW TEST:60.111 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:56.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:02:56.524: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a" in namespace "projected-1195" to be "Succeeded or Failed" May 20 22:02:56.527: INFO: Pod "downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.421666ms May 20 22:02:58.532: INFO: Pod "downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008355754s May 20 22:03:00.536: INFO: Pod "downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012030544s STEP: Saw pod success May 20 22:03:00.536: INFO: Pod "downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a" satisfied condition "Succeeded or Failed" May 20 22:03:00.539: INFO: Trying to get logs from node node2 pod downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a container client-container: STEP: delete the pod May 20 22:03:00.550: INFO: Waiting for pod downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a to disappear May 20 22:03:00.554: INFO: Pod downwardapi-volume-d9117bd5-4b05-4a4d-8ae0-c5a563403e1a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:00.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1195" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:47.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7527.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7527.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7527.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7527.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 22:02:55.541: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.544: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.546: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.549: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.557: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.560: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.562: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7527.svc.cluster.local from pod 
dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.564: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7527.svc.cluster.local from pod dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03: the server could not find the requested resource (get pods dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03) May 20 22:02:55.570: INFO: Lookups using dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7527.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7527.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7527.svc.cluster.local jessie_udp@dns-test-service-2.dns-7527.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7527.svc.cluster.local] May 20 22:03:00.608: INFO: DNS probes using dns-7527/dns-test-e30883cc-1a8b-4428-8d19-f86d2d38ab03 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:00.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7527" for this suite. • [SLOW TEST:13.139 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":5,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:55.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 20 22:02:56.099: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:02:56.112: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:02:58.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680976, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680976, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680976, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680976, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:03:01.131: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:01.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5699" for this suite. STEP: Destroying namespace "webhook-5699-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.529 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":5,"skipped":55,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:00.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-0dbd1d27-94b6-472b-b85a-73e27cadbe8b STEP: Creating a pod to test consume secrets May 20 22:03:00.645: INFO: Waiting up to 5m0s for pod "pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811" in namespace "secrets-1200" to be "Succeeded or Failed" May 20 22:03:00.647: INFO: Pod "pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.474621ms May 20 22:03:02.652: INFO: Pod "pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006881081s May 20 22:03:04.656: INFO: Pod "pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011165693s STEP: Saw pod success May 20 22:03:04.656: INFO: Pod "pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811" satisfied condition "Succeeded or Failed" May 20 22:03:04.658: INFO: Trying to get logs from node node2 pod pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811 container secret-volume-test: STEP: delete the pod May 20 22:03:04.672: INFO: Waiting for pod pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811 to disappear May 20 22:03:04.674: INFO: Pod pod-secrets-48dd798e-7dd6-43dc-b634-fdf2c2496811 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:04.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1200" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:04.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics May 20 22:03:05.864: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 20 22:03:05.931: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:05.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying
namespace "gc-8669" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":8,"skipped":190,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:00.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 20 22:03:01.009: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:03:01.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:03:03.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680981, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680981, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680981, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680981, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:03:06.043: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:06.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-94" for this suite. STEP: Destroying namespace "webhook-94-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.449 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":6,"skipped":141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:05.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 20 22:03:06.361: INFO: starting watch STEP: patching STEP: updating May 20 22:03:06.368: INFO: waiting for watch events with expected annotations May 20 22:03:06.368: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:06.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-7184" for this suite. 
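The Certificates test above touches the whole certificates.k8s.io/v1 surface: create, get, list, watch, patch, update, delete, plus the /approval and /status subresources. A sketch of driving the same API by hand; the key, subject, and CSR name are placeholders, and kubectl certificate approve is the command that writes the /approval subresource:

# Generate a client key and a PEM CSR (placeholder subject).
openssl req -new -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo-user" -out demo.csr
# Create the CertificateSigningRequest (spec.request is base64-encoded PEM).
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 < demo.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl get csr demo-csr                 # read it back
kubectl certificate approve demo-csr     # updates the /approval subresource
kubectl get csr demo-csr -o jsonpath='{.status.conditions[*].type}'
kubectl delete csr demo-csr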
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":9,"skipped":191,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:54.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:02:55.200: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:02:57.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680975, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680975, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680975, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788680975, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:03:00.225: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:10.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8302" for this suite. STEP: Destroying namespace "webhook-8302-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.810 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:06.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:03:06.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056" in namespace "projected-4946" to be "Succeeded or Failed" May 20 22:03:06.485: INFO: Pod "downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534123ms May 20 22:03:08.489: INFO: Pod "downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007823013s May 20 22:03:10.493: INFO: Pod "downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012377191s STEP: Saw pod success May 20 22:03:10.493: INFO: Pod "downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056" satisfied condition "Succeeded or Failed" May 20 22:03:10.495: INFO: Trying to get logs from node node1 pod downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056 container client-container: STEP: delete the pod May 20 22:03:10.620: INFO: Waiting for pod downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056 to disappear May 20 22:03:10.623: INFO: Pod downwardapi-volume-895fb09a-3a31-4412-ab3c-c3cf215fe056 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:10.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4946" for this suite. 
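The projected downwardAPI test is the same podname check as the plain downward API volume test earlier, only fed through a projected volume, which can merge downwardAPI items with secret and configMap sources in a single mount. A sketch with illustrative names; only the volumes stanza differs from the earlier pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.34
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:              # projected volume wrapping a downwardAPI source
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF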
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":204,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:57.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:02:57.915: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 20 22:03:06.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7058 --namespace=crd-publish-openapi-7058 create -f -' May 20 22:03:07.008: INFO: stderr: "" May 20 22:03:07.008: INFO: stdout: "e2e-test-crd-publish-openapi-966-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 20 22:03:07.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7058 --namespace=crd-publish-openapi-7058 delete e2e-test-crd-publish-openapi-966-crds test-cr' May 20 22:03:07.170: INFO: stderr: "" May 20 22:03:07.170: INFO: stdout: "e2e-test-crd-publish-openapi-966-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 20 22:03:07.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7058 --namespace=crd-publish-openapi-7058 apply -f -' May 20 22:03:07.535: INFO: stderr: "" May 20 22:03:07.535: INFO: stdout: "e2e-test-crd-publish-openapi-966-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 20 22:03:07.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7058 --namespace=crd-publish-openapi-7058 delete e2e-test-crd-publish-openapi-966-crds test-cr' May 20 22:03:07.699: INFO: stderr: "" May 20 22:03:07.699: INFO: stdout: "e2e-test-crd-publish-openapi-966-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 20 22:03:07.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7058 explain e2e-test-crd-publish-openapi-966-crds' May 20 22:03:08.045: INFO: stderr: "" May 20 22:03:08.045: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-966-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:11.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7058" for this suite. 
• [SLOW TEST:13.793 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:44.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-l4j5 STEP: Creating a pod to test atomic-volume-subpath May 20 22:02:44.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-l4j5" in namespace "subpath-2759" to be "Succeeded or Failed" May 20 22:02:44.080: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.928293ms May 20 22:02:46.084: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005184942s May 20 22:02:48.087: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008617966s May 20 22:02:50.092: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 6.013923683s May 20 22:02:52.098: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 8.019981922s May 20 22:02:54.102: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 10.024056887s May 20 22:02:56.106: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 12.028055659s May 20 22:02:58.110: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 14.031601741s May 20 22:03:00.116: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 16.03737462s May 20 22:03:02.120: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 18.041182354s May 20 22:03:04.123: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 20.044966948s May 20 22:03:06.126: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 22.048021313s May 20 22:03:08.130: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 24.051902018s May 20 22:03:10.137: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.058977782s
May 20 22:03:12.141: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Running", Reason="", readiness=true. Elapsed: 28.062580877s
May 20 22:03:14.146: INFO: Pod "pod-subpath-test-projected-l4j5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.067847918s
STEP: Saw pod success
May 20 22:03:14.146: INFO: Pod "pod-subpath-test-projected-l4j5" satisfied condition "Succeeded or Failed"
May 20 22:03:14.150: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-l4j5 container test-container-subpath-projected-l4j5:
STEP: delete the pod
May 20 22:03:14.161: INFO: Waiting for pod pod-subpath-test-projected-l4j5 to disappear
May 20 22:03:14.164: INFO: Pod pod-subpath-test-projected-l4j5 no longer exists
STEP: Deleting pod pod-subpath-test-projected-l4j5
May 20 22:03:14.164: INFO: Deleting pod "pod-subpath-test-projected-l4j5" in namespace "subpath-2759"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:03:14.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2759" for this suite.

• [SLOW TEST:30.134 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":192,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:03:01.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
May 20 22:03:01.266: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
May 20 22:03:01.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 create -f -'
May 20 22:03:01.672: INFO: stderr: ""
May 20 22:03:01.672: INFO: stdout: "service/agnhost-replica created\n"
May 20 22:03:01.672: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
May 20 22:03:01.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 create -f -'
May 20 22:03:02.012: INFO: stderr: ""
May 20 22:03:02.012: INFO: stdout: "service/agnhost-primary created\n"
May 20 22:03:02.013: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 20 22:03:02.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 create -f -'
May 20 22:03:02.374: INFO: stderr: ""
May 20 22:03:02.374: INFO: stdout: "service/frontend created\n"
May 20 22:03:02.375: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 20 22:03:02.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 create -f -'
May 20 22:03:02.684: INFO: stderr: ""
May 20 22:03:02.684: INFO: stdout: "deployment.apps/frontend created\n"
May 20 22:03:02.684: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 20 22:03:02.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 create -f -'
May 20 22:03:03.023: INFO: stderr: ""
May 20 22:03:03.023: INFO: stdout: "deployment.apps/agnhost-primary created\n"
May 20 22:03:03.023: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 20 22:03:03.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 create -f -'
May 20 22:03:03.341: INFO: stderr: ""
May 20 22:03:03.341: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
May 20 22:03:03.341: INFO: Waiting for all frontend pods to be Running.
May 20 22:03:13.392: INFO: Waiting for frontend to serve content.
May 20 22:03:13.400: INFO: Trying to add a new entry to the guestbook.
May 20 22:03:14.411: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 20 22:03:14.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 delete --grace-period=0 --force -f -'
May 20 22:03:14.562: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:14.563: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 20 22:03:14.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 delete --grace-period=0 --force -f -' May 20 22:03:14.685: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:14.685: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 20 22:03:14.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 delete --grace-period=0 --force -f -' May 20 22:03:14.826: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:14.826: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 20 22:03:14.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 delete --grace-period=0 --force -f -' May 20 22:03:14.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:14.958: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 20 22:03:14.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 delete --grace-period=0 --force -f -' May 20 22:03:15.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:15.100: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 20 22:03:15.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 delete --grace-period=0 --force -f -' May 20 22:03:15.243: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:15.243: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:15.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1453" for this suite. 
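The guestbook validation steps in the log ("Waiting for frontend to serve content", "Trying to add a new entry") can be reproduced by hand while the manifests above are still deployed. A sketch; the /guestbook?cmd=set|get query shape of the agnhost frontend and the local port are assumptions for illustration, not values taken from the log:

# Wait for the frontend Deployment, then reach its Service locally.
kubectl -n kubectl-1453 rollout status deployment/frontend --timeout=2m
kubectl -n kubectl-1453 port-forward service/frontend 8080:80 &
sleep 2
# Add an entry and read it back (endpoint shape assumed).
curl -s 'http://localhost:8080/guestbook?cmd=set&key=messages&value=TestEntry'
curl -s 'http://localhost:8080/guestbook?cmd=get&key=messages'
kill %1   # stop the background port-forward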
• [SLOW TEST:14.012 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:06.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 22:03:15.239: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:15.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2338" for this suite. • [SLOW TEST:9.084 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":6,"skipped":69,"failed":0} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":170,"failed":0} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:15.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics May 20 22:03:16.326: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 20 22:03:16.392: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:16.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9752" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":8,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:11.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-a0a97220-363c-44cd-819d-a5147cca85f8 STEP: Creating a pod to test consume secrets May 20 22:03:11.732: INFO: Waiting up to 5m0s for pod "pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8" in namespace "secrets-5894" to be "Succeeded or Failed" May 20 22:03:11.734: INFO: Pod "pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109864ms May 20 22:03:13.738: INFO: Pod "pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005531289s May 20 22:03:15.742: INFO: Pod "pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009648478s May 20 22:03:17.745: INFO: Pod "pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.012580576s STEP: Saw pod success May 20 22:03:17.745: INFO: Pod "pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8" satisfied condition "Succeeded or Failed" May 20 22:03:17.747: INFO: Trying to get logs from node node1 pod pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8 container secret-volume-test: STEP: delete the pod May 20 22:03:17.758: INFO: Waiting for pod pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8 to disappear May 20 22:03:17.760: INFO: Pod pod-secrets-10c71802-4ac0-49cd-b25d-442a9f717cd8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:17.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5894" for this suite. • [SLOW TEST:6.070 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:25.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1569 May 20 22:02:25.961: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 20 22:02:27.964: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 20 22:02:27.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1569 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 20 22:02:28.218: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 20 22:02:28.218: INFO: stdout: "iptables" May 20 22:02:28.218: INFO: proxyMode: iptables May 20 22:02:28.225: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 22:02:28.227: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1569 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1569 I0520 22:02:28.237431 36 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1569, replica count: 3 I0520 22:02:31.288595 36 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:02:34.290473 36 
runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:02:34.296: INFO: Creating new exec pod May 20 22:02:47.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1569 exec execpod-affinity6dtsx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' May 20 22:02:47.617: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 20 22:02:47.617: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:02:47.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1569 exec execpod-affinity6dtsx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.28.80 80' May 20 22:02:47.953: INFO: stderr: "+ nc -v -t -w 2 10.233.28.80 80\n+ echo hostName\nConnection to 10.233.28.80 80 port [tcp/http] succeeded!\n" May 20 22:02:47.953: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:02:47.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1569 exec execpod-affinity6dtsx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.28.80:80/ ; done' May 20 22:02:48.277: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n" May 20 22:02:48.277: INFO: stdout: "\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55\naffinity-clusterip-timeout-jrh55" May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: 
affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Received response from host: affinity-clusterip-timeout-jrh55 May 20 22:02:48.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1569 exec execpod-affinity6dtsx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.28.80:80/' May 20 22:02:48.553: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n" May 20 22:02:48.553: INFO: stdout: "affinity-clusterip-timeout-jrh55" May 20 22:03:08.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1569 exec execpod-affinity6dtsx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.28.80:80/' May 20 22:03:08.968: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.28.80:80/\n" May 20 22:03:08.968: INFO: stdout: "affinity-clusterip-timeout-xcjc6" May 20 22:03:08.968: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1569, will wait for the garbage collector to delete the pods May 20 22:03:09.035: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 4.191481ms May 20 22:03:09.136: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.327177ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:25.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1569" for this suite. 
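------------------------------
Aside on the affinity check above: every curl in the loop landed on the same backend (affinity-clusterip-timeout-jrh55), and only after the test idled roughly 20 seconds (22:02:48 to 22:03:08) did a different backend (affinity-clusterip-timeout-xcjc6) answer, showing the affinity had lapsed. A minimal sketch of a Service with the same ClientIP session-affinity timeout; the selector, targetPort, and the 10-second value are illustrative assumptions, not read from this run:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    app: affinity-demo            # assumed label for the backend pods
  ports:
  - port: 80                      # the port probed with nc and curl above
    targetPort: 9376              # assumed container port
  sessionAffinity: ClientIP       # pin each client IP to a single backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10          # affinity lapses after 10 idle seconds
EOF
------------------------------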
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:59.930 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":82,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:14.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running May 20 22:03:16.228: INFO: running pods: 0 < 1 May 20 22:03:18.233: INFO: running pods: 0 < 1 May 20 22:03:20.232: INFO: running pods: 0 < 1 May 20 22:03:22.232: INFO: running pods: 0 < 1 May 20 22:03:24.233: INFO: running pods: 0 < 1 May 20 22:03:26.231: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:28.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9645" for this suite. 
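------------------------------
For reference, the DisruptionController steps above create a PodDisruptionBudget, wait for the controller to process it, then update and patch its status subresource. A minimal PDB the same controller would reconcile, with an illustrative name and selector (policy/v1 is the GA API in this release line):

$ kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb                 # illustrative name
spec:
  minAvailable: 1                # always keep at least one matching pod
  selector:
    matchLabels:
      app: demo                  # illustrative label
EOF
$ kubectl get pdb demo-pdb -o jsonpath='{.status.disruptionsAllowed}'   # populated once the controller has processed the PDB
------------------------------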
• [SLOW TEST:14.078 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":12,"skipped":195,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:16.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 20 22:03:16.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 create -f -' May 20 22:03:16.967: INFO: stderr: "" May 20 22:03:16.967: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 22:03:16.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:03:17.144: INFO: stderr: "" May 20 22:03:17.144: INFO: stdout: "update-demo-nautilus-mmmfp update-demo-nautilus-r5rxl " May 20 22:03:17.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-mmmfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:03:17.306: INFO: stderr: "" May 20 22:03:17.306: INFO: stdout: "" May 20 22:03:17.306: INFO: update-demo-nautilus-mmmfp is created but not running May 20 22:03:22.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:03:22.469: INFO: stderr: "" May 20 22:03:22.469: INFO: stdout: "update-demo-nautilus-mmmfp update-demo-nautilus-r5rxl " May 20 22:03:22.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-mmmfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:03:22.641: INFO: stderr: "" May 20 22:03:22.641: INFO: stdout: "" May 20 22:03:22.641: INFO: update-demo-nautilus-mmmfp is created but not running May 20 22:03:27.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:03:27.811: INFO: stderr: "" May 20 22:03:27.811: INFO: stdout: "update-demo-nautilus-mmmfp update-demo-nautilus-r5rxl " May 20 22:03:27.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-mmmfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:03:27.970: INFO: stderr: "" May 20 22:03:27.970: INFO: stdout: "" May 20 22:03:27.970: INFO: update-demo-nautilus-mmmfp is created but not running May 20 22:03:32.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 20 22:03:33.152: INFO: stderr: "" May 20 22:03:33.152: INFO: stdout: "update-demo-nautilus-mmmfp update-demo-nautilus-r5rxl " May 20 22:03:33.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-mmmfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:03:33.310: INFO: stderr: "" May 20 22:03:33.311: INFO: stdout: "true" May 20 22:03:33.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-mmmfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:03:33.477: INFO: stderr: "" May 20 22:03:33.477: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:03:33.477: INFO: validating pod update-demo-nautilus-mmmfp May 20 22:03:33.480: INFO: got data: { "image": "nautilus.jpg" } May 20 22:03:33.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:03:33.480: INFO: update-demo-nautilus-mmmfp is verified up and running May 20 22:03:33.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-r5rxl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 20 22:03:33.646: INFO: stderr: "" May 20 22:03:33.646: INFO: stdout: "true" May 20 22:03:33.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods update-demo-nautilus-r5rxl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 20 22:03:33.795: INFO: stderr: "" May 20 22:03:33.796: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 20 22:03:33.796: INFO: validating pod update-demo-nautilus-r5rxl May 20 22:03:33.799: INFO: got data: { "image": "nautilus.jpg" } May 20 22:03:33.799: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 22:03:33.799: INFO: update-demo-nautilus-r5rxl is verified up and running STEP: using delete to clean up resources May 20 22:03:33.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 delete --grace-period=0 --force -f -' May 20 22:03:33.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:03:33.954: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 22:03:33.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get rc,svc -l name=update-demo --no-headers' May 20 22:03:34.145: INFO: stderr: "No resources found in kubectl-2709 namespace.\n" May 20 22:03:34.145: INFO: stdout: "" May 20 22:03:34.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2709 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 22:03:34.318: INFO: stderr: "" May 20 22:03:34.318: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:34.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2709" for this suite. 
• [SLOW TEST:17.775 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":9,"skipped":243,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:34.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:03:34.354: INFO: Got root ca configmap in namespace "svcaccounts-5010" May 20 22:03:34.358: INFO: Deleted root ca configmap in namespace "svcaccounts-5010" STEP: waiting for a new root ca configmap created May 20 22:03:34.862: INFO: Recreated root ca configmap in namespace "svcaccounts-5010" May 20 22:03:34.865: INFO: Updated root ca configmap in namespace "svcaccounts-5010" STEP: waiting for the root ca configmap reconciled May 20 22:03:35.369: INFO: Reconciled root ca configmap in namespace "svcaccounts-5010" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:35.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5010" for this suite. 
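------------------------------
The ServiceAccounts check above leans on the root-CA publisher controller in kube-controller-manager: kube-root-ca.crt is recreated or reverted in any namespace shortly after it is deleted or edited, which is what the short waits in the log correspond to. A quick manual reproduction, with an illustrative namespace name:

$ kubectl create namespace rootca-demo                      # illustrative namespace
$ kubectl -n rootca-demo delete configmap kube-root-ca.crt
$ sleep 2                                                   # give the controller a moment
$ kubectl -n rootca-demo get configmap kube-root-ca.crt     # back again, republished by the controller
------------------------------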
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":10,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:03.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6455 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6455 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6455 May 20 22:02:03.800: INFO: Found 0 stateful pods, waiting for 1 May 20 22:02:13.805: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 20 22:02:13.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:02:14.061: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:02:14.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:02:14.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:02:14.063: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 20 22:02:24.069: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 22:02:24.069: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:02:24.081: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999464s May 20 22:02:25.085: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997154439s May 20 22:02:26.088: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993050828s May 20 22:02:27.092: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.989946134s May 20 22:02:28.095: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.986775487s May 20 22:02:29.099: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.983066062s May 20 22:02:30.102: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.979560181s May 20 22:02:31.105: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.975711578s May 20 22:02:32.109: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.97333221s May 20 22:02:33.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 968.792868ms STEP: 
Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6455 May 20 22:02:34.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:02:34.439: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:02:34.439: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:02:34.439: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:02:34.443: INFO: Found 1 stateful pods, waiting for 3 May 20 22:02:44.447: INFO: Found 2 stateful pods, waiting for 3 May 20 22:02:54.448: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 20 22:02:54.448: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:02:54.448: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 20 22:02:54.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:02:54.725: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:02:54.725: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:02:54.725: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:02:54.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:02:54.974: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:02:54.974: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:02:54.974: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:02:54.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:02:55.240: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:02:55.241: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:02:55.241: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:02:55.241: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:02:55.243: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 20 22:03:05.251: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 22:03:05.251: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 20 22:03:05.251: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 20 22:03:05.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999507s May 20 22:03:06.264: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996447249s May 20 22:03:07.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993252329s May 20 22:03:08.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988617768s May 20 22:03:09.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985070517s May 20 22:03:10.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980619952s May 20 22:03:11.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967491502s May 20 22:03:12.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962961747s May 20 22:03:13.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957860329s May 20 22:03:14.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.89247ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6455 May 20 22:03:15.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:03:15.755: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:03:15.755: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:03:15.755: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:03:15.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:03:16.893: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:03:16.893: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:03:16.893: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:03:16.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6455 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:03:17.323: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:03:17.323: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:03:17.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:03:17.323: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 20 22:03:37.337: INFO: Deleting all statefulset in ns statefulset-6455 May 20 22:03:37.340: INFO: Scaling statefulset ss to 0 May 20 22:03:37.350: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:03:37.353: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:37.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6455" for this suite. 
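------------------------------
What the halted-scaling assertions above exercise: with the default OrderedReady pod management policy, a StatefulSet creates and deletes pods strictly one at a time and refuses to move past an unready pod. Moving index.html out of the apache htdocs directory fails the readiness probe, which is why ss held at 1 replica during scale-up and at 3 during scale-down for the full 10-second verification windows, and why the pods came up in order ss-0, ss-1, ss-2 and went away in reverse. A minimal sketch; the labels match the test's watcher selector (baz=blah,foo=bar), while the image and probe are assumptions standing in for the suite's own httpd test image:

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: OrderedReady   # default: one pod at a time, in ordinal order
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: webserver
        image: httpd:2.4              # assumed stand-in image
        readinessProbe:               # removing index.html fails this and halts scaling
          httpGet:
            path: /index.html
            port: 80
EOF
$ kubectl scale statefulset ss --replicas=3   # ss-1 starts only after ss-0 is Ready, ss-2 after ss-1
------------------------------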
• [SLOW TEST:93.602 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:37.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 20 22:03:37.418: INFO: Waiting up to 5m0s for pod "security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08" in namespace "security-context-4661" to be "Succeeded or Failed" May 20 22:03:37.420: INFO: Pod "security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08": Phase="Pending", Reason="", readiness=false. Elapsed: 1.799152ms May 20 22:03:39.424: INFO: Pod "security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005554603s May 20 22:03:41.427: INFO: Pod "security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008644322s STEP: Saw pod success May 20 22:03:41.427: INFO: Pod "security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08" satisfied condition "Succeeded or Failed" May 20 22:03:41.429: INFO: Trying to get logs from node node2 pod security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08 container test-container: STEP: delete the pod May 20 22:03:41.441: INFO: Waiting for pod security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08 to disappear May 20 22:03:41.443: INFO: Pod security-context-ef111c44-eb39-40cd-aeff-bc9e262b7d08 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:41.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4661" for this suite. 
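------------------------------
The Security Context pod above passes when the container's effective uid and gid match pod.Spec.SecurityContext.RunAsUser and RunAsGroup. A minimal pod to observe the same behavior by hand; the uid/gid values and image are illustrative:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000     # every container in the pod runs as uid 1000
    runAsGroup: 3000    # and gid 3000
  containers:
  - name: test-container
    image: busybox:1.35
    command: ["sh", "-c", "id"]
EOF
$ kubectl logs security-context-demo    # expect uid=1000 gid=3000 in the output
------------------------------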
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:17.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 20 22:03:17.814: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:44.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6339" for this suite. • [SLOW TEST:26.406 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:44.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated 
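------------------------------
The sysctl steps above set kernel.shm_rmid_forced through the pod security context and read it back from inside the container; kernel.shm_rmid_forced is one of the namespaced "safe" sysctls the kubelet permits without extra node configuration. A minimal reproduction (pod name and image are illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced    # namespaced, safe sysctl
      value: "1"
  containers:
  - name: test
    image: busybox:1.35
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
EOF
$ kubectl logs sysctl-demo    # expect: 1
------------------------------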
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:48.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9694" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":6,"skipped":101,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:48.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:48.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3219" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":7,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:41.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 20 22:03:41.701: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 20 22:03:41.705: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 20 22:03:41.706: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 20 22:03:41.724: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 20 22:03:41.724: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 20 22:03:41.751: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 20 22:03:41.751: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 20 22:03:48.798: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:48.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3435" for this suite. 
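------------------------------
Decoding the verification lines above: defaultRequest is cpu=100m, memory=209715200 bytes (200Mi), ephemeral-storage=214748364800 bytes (200Gi), and the default limits are cpu=500m, memory=500Mi, ephemeral-storage=500Gi; the LimitRanger admission plugin stamps these onto any container created without its own requests or limits, and merges them for the "partial resource requirements" pod. A LimitRange carrying those same defaults (only the name is illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limits-demo             # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:             # applied when a container declares no requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                    # applied when a container declares no limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF
------------------------------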
• [SLOW TEST:7.144 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":4,"skipped":142,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:28.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-l6cd STEP: Creating a pod to test atomic-volume-subpath May 20 22:03:28.323: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-l6cd" in namespace "subpath-13" to be "Succeeded or Failed" May 20 22:03:28.327: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052265ms May 20 22:03:30.331: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007297634s May 20 22:03:32.334: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.010141244s May 20 22:03:34.337: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 6.013731444s May 20 22:03:36.342: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 8.018255633s May 20 22:03:38.345: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 10.022071194s May 20 22:03:40.351: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 12.027338489s May 20 22:03:42.354: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 14.031041578s May 20 22:03:44.358: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 16.034825792s May 20 22:03:46.362: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 18.039035847s May 20 22:03:48.366: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 20.042192258s May 20 22:03:50.371: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Running", Reason="", readiness=true. Elapsed: 22.047303825s May 20 22:03:52.375: INFO: Pod "pod-subpath-test-secret-l6cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.051440105s STEP: Saw pod success May 20 22:03:52.375: INFO: Pod "pod-subpath-test-secret-l6cd" satisfied condition "Succeeded or Failed" May 20 22:03:52.377: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-l6cd container test-container-subpath-secret-l6cd: STEP: delete the pod May 20 22:03:52.390: INFO: Waiting for pod pod-subpath-test-secret-l6cd to disappear May 20 22:03:52.392: INFO: Pod pod-subpath-test-secret-l6cd no longer exists STEP: Deleting pod pod-subpath-test-secret-l6cd May 20 22:03:52.392: INFO: Deleting pod "pod-subpath-test-secret-l6cd" in namespace "subpath-13" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:52.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-13" for this suite. • [SLOW TEST:24.119 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":202,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:48.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:03:48.872: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456" in namespace "downward-api-5358" to be "Succeeded or Failed" May 20 22:03:48.875: INFO: Pod "downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694654ms May 20 22:03:50.879: INFO: Pod "downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006789746s May 20 22:03:52.882: INFO: Pod "downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009930384s STEP: Saw pod success May 20 22:03:52.882: INFO: Pod "downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456" satisfied condition "Succeeded or Failed" May 20 22:03:52.884: INFO: Trying to get logs from node node2 pod downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456 container client-container: STEP: delete the pod May 20 22:03:52.896: INFO: Waiting for pod downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456 to disappear May 20 22:03:52.898: INFO: Pod downwardapi-volume-74b90980-be65-44da-adf6-97e53923a456 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:52.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5358" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":151,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:48.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:03:48.778: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:03:50.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:03:52.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681028, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:03:55.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:03:56.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3839" for this suite. STEP: Destroying namespace "webhook-3839-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.360 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":8,"skipped":142,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:56.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:03:56.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09" in namespace "downward-api-6475" to be "Succeeded or Failed" May 20 22:03:56.944: INFO: Pod "downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440911ms May 20 22:03:58.949: INFO: Pod "downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007324196s May 20 22:04:00.953: INFO: Pod "downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011183315s STEP: Saw pod success May 20 22:04:00.953: INFO: Pod "downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09" satisfied condition "Succeeded or Failed" May 20 22:04:00.956: INFO: Trying to get logs from node node1 pod downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09 container client-container: STEP: delete the pod May 20 22:04:00.974: INFO: Waiting for pod downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09 to disappear May 20 22:04:00.976: INFO: Pod downwardapi-volume-8454d6f3-2726-4dac-a29b-c57593abbd09 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:00.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6475" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:52.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-2110 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[] May 20 22:03:52.954: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found May 20 22:03:53.961: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2110 May 20 22:03:53.976: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 20 22:03:55.980: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 20 22:03:57.981: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[pod1:[100]] May 20 22:03:57.992: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-2110 May 20 22:03:58.005: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:00.010: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:02.009: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[pod1:[100] pod2:[101]] May 20 22:04:02.021: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-2110 STEP: 
waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[pod2:[101]] May 20 22:04:02.049: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-2110 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[] May 20 22:04:02.059: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:02.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2110" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.151 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":6,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:02.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:04:02.444: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:04:04.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681042, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681042, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681042, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681042, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:04:07.466: INFO: Waiting 
for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:07.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7807" for this suite. STEP: Destroying namespace "webhook-7807-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.406 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:25.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 20 22:03:31.905: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3400 PodName:var-expansion-b263f795-963c-4d87-bbe2-eeec715d309c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:03:31.905: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path May 20 22:03:31.991: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3400 PodName:var-expansion-b263f795-963c-4d87-bbe2-eeec715d309c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:03:31.991: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value May 20 22:03:32.582: INFO: Successfully updated pod "var-expansion-b263f795-963c-4d87-bbe2-eeec715d309c" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 20 22:03:32.584: INFO: Deleting pod "var-expansion-b263f795-963c-4d87-bbe2-eeec715d309c" in namespace "var-expansion-3400" May 20 22:03:32.588: INFO: Wait up to 5m0s for pod "var-expansion-b263f795-963c-4d87-bbe2-eeec715d309c" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:18.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3400" for this 
suite. • [SLOW TEST:52.741 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":7,"skipped":183,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:07.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:18.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2551" for this suite. • [SLOW TEST:11.068 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":6,"skipped":84,"failed":0} S ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":8,"skipped":183,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:18.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:18.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3880" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":184,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:18.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 20 22:04:18.657: INFO: Waiting up to 5m0s for pod "downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637" in namespace "downward-api-600" to be "Succeeded or Failed" May 20 22:04:18.660: INFO: Pod "downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637": Phase="Pending", Reason="", readiness=false. Elapsed: 2.613964ms May 20 22:04:20.663: INFO: Pod "downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005678969s May 20 22:04:22.666: INFO: Pod "downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008906374s STEP: Saw pod success May 20 22:04:22.666: INFO: Pod "downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637" satisfied condition "Succeeded or Failed" May 20 22:04:22.669: INFO: Trying to get logs from node node2 pod downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637 container dapi-container: STEP: delete the pod May 20 22:04:22.683: INFO: Waiting for pod downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637 to disappear May 20 22:04:22.685: INFO: Pod downward-api-3077db1c-b33d-420a-900b-caf1b3fcc637 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:22.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-600" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":90,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:01.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-q2sx STEP: Creating a pod to test atomic-volume-subpath May 20 22:04:01.093: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q2sx" in namespace "subpath-1688" to be "Succeeded or Failed" May 20 22:04:01.096: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.663971ms May 20 22:04:03.099: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006355503s May 20 22:04:05.104: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 4.011177264s May 20 22:04:07.108: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 6.015299971s May 20 22:04:09.113: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 8.019962725s May 20 22:04:11.117: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 10.023816132s May 20 22:04:13.121: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 12.027824619s May 20 22:04:15.126: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 14.032661714s May 20 22:04:17.130: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 16.036866919s May 20 22:04:19.136: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 18.042963608s May 20 22:04:21.139: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.046351757s May 20 22:04:23.144: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Running", Reason="", readiness=true. Elapsed: 22.050517799s May 20 22:04:25.148: INFO: Pod "pod-subpath-test-configmap-q2sx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054605034s STEP: Saw pod success May 20 22:04:25.148: INFO: Pod "pod-subpath-test-configmap-q2sx" satisfied condition "Succeeded or Failed" May 20 22:04:25.150: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-q2sx container test-container-subpath-configmap-q2sx: STEP: delete the pod May 20 22:04:25.162: INFO: Waiting for pod pod-subpath-test-configmap-q2sx to disappear May 20 22:04:25.164: INFO: Pod pod-subpath-test-configmap-q2sx no longer exists STEP: Deleting pod pod-subpath-test-configmap-q2sx May 20 22:04:25.164: INFO: Deleting pod "pod-subpath-test-configmap-q2sx" in namespace "subpath-1688" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:25.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1688" for this suite. • [SLOW TEST:24.124 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:18.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 20 22:04:18.724: INFO: The status of Pod pod-update-2e44e345-c720-40fa-a0f6-ddc8992b0e44 is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:20.727: INFO: The status of Pod pod-update-2e44e345-c720-40fa-a0f6-ddc8992b0e44 is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:22.727: INFO: The status of Pod pod-update-2e44e345-c720-40fa-a0f6-ddc8992b0e44 is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:24.728: INFO: The status of Pod pod-update-2e44e345-c720-40fa-a0f6-ddc8992b0e44 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 20 22:04:25.241: INFO: Successfully updated pod "pod-update-2e44e345-c720-40fa-a0f6-ddc8992b0e44" STEP: verifying the updated pod is in kubernetes May 
20 22:04:25.247: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:25.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8028" for this suite. • [SLOW TEST:6.569 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":193,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:52.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:03:52.462: INFO: created pod May 20 22:03:52.462: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3537" to be "Succeeded or Failed" May 20 22:03:52.464: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.444582ms May 20 22:03:54.467: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005677182s May 20 22:03:56.472: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010730608s STEP: Saw pod success May 20 22:03:56.472: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" May 20 22:04:26.473: INFO: polling logs May 20 22:04:26.480: INFO: Pod logs: 2022/05/20 22:03:55 OK: Got token 2022/05/20 22:03:55 validating with in-cluster discovery 2022/05/20 22:03:55 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/05/20 22:03:55 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3537:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1653084832, NotBefore:1653084232, IssuedAt:1653084232, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3537", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ad2aadab-4f2b-4a3e-9d9b-8de53f43b3d6"}}} 2022/05/20 22:03:55 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/05/20 22:03:55 OK: Validated signature on JWT 2022/05/20 22:03:55 OK: Got valid claims from token! 
2022/05/20 22:03:55 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3537:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1653084832, NotBefore:1653084232, IssuedAt:1653084232, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3537", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ad2aadab-4f2b-4a3e-9d9b-8de53f43b3d6"}}} May 20 22:04:26.480: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:26.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3537" for this suite. • [SLOW TEST:34.070 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":14,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:26.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 20 22:04:26.614: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 20 22:04:26.621: INFO: starting watch STEP: patching STEP: updating May 20 22:04:26.633: INFO: waiting for watch events with expected annotations May 20 22:04:26.633: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:26.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-9016" for this suite. 
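------------------------------
Note: the Ingress API spec above exercises create/get/list/watch/patch/update, the /status subresource, and delete against networking.k8s.io/v1. Below is a minimal client-go sketch of just the create + merge-patch + delete portion of that flow; the namespace, object name, and backend service name/port are illustrative assumptions, not the suite's actual fixtures.

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Minimal Ingress: a default backend only (service name/port are illustrative).
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "example-ingress"},
		Spec: networkingv1.IngressSpec{
			DefaultBackend: &networkingv1.IngressBackend{
				Service: &networkingv1.IngressServiceBackend{
					Name: "test-service",
					Port: networkingv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}
	created, err := cs.NetworkingV1().Ingresses("default").Create(ctx, ing, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Merge-patch an annotation, as the "patching" step above does.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := cs.NetworkingV1().Ingresses("default").Patch(ctx, created.Name,
		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("patched", created.Name)

	if err := cs.NetworkingV1().Ingresses("default").Delete(ctx, created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------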
• ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":15,"skipped":242,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:22.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-0e1eadaa-228a-4552-a32e-0616b6101e1a STEP: Creating a pod to test consume secrets May 20 22:04:22.739: INFO: Waiting up to 5m0s for pod "pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7" in namespace "secrets-1974" to be "Succeeded or Failed" May 20 22:04:22.741: INFO: Pod "pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424209ms May 20 22:04:24.744: INFO: Pod "pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004984428s May 20 22:04:26.747: INFO: Pod "pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008054931s STEP: Saw pod success May 20 22:04:26.747: INFO: Pod "pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7" satisfied condition "Succeeded or Failed" May 20 22:04:26.749: INFO: Trying to get logs from node node2 pod pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7 container secret-env-test: STEP: delete the pod May 20 22:04:26.759: INFO: Waiting for pod pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7 to disappear May 20 22:04:26.761: INFO: Pod pod-secrets-c1488b6d-056b-4aea-a972-24dab39822e7 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:26.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1974" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:25.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:04:25.276: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes May 20 22:04:25.290: INFO: The status of Pod pod-logs-websocket-8422e920-378c-4718-b108-aabc19a95bfc is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:27.294: INFO: The status of Pod pod-logs-websocket-8422e920-378c-4718-b108-aabc19a95bfc is Pending, waiting for it to be Running (with Ready = true) May 20 22:04:29.294: INFO: The status of Pod pod-logs-websocket-8422e920-378c-4718-b108-aabc19a95bfc is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:29.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7995" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:26.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 20 22:04:26.835: INFO: Waiting up to 5m0s for pod "pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa" in namespace "emptydir-4420" to be "Succeeded or Failed" May 20 22:04:26.837: INFO: Pod "pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744244ms May 20 22:04:28.842: INFO: Pod "pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007009607s May 20 22:04:30.845: INFO: Pod "pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010144694s STEP: Saw pod success May 20 22:04:30.845: INFO: Pod "pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa" satisfied condition "Succeeded or Failed" May 20 22:04:30.847: INFO: Trying to get logs from node node2 pod pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa container test-container: STEP: delete the pod May 20 22:04:30.857: INFO: Waiting for pod pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa to disappear May 20 22:04:30.859: INFO: Pod pod-a702c4da-3e09-4176-ba53-0cf35dbeb6aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:30.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4420" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":104,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:25.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-736f5de6-f49b-47e8-a6e1-fbf3812af33b STEP: Creating a pod to test consume configMaps May 20 22:04:25.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a" in namespace "configmap-7327" to be "Succeeded or Failed" May 20 22:04:25.330: INFO: Pod "pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637146ms May 20 22:04:27.332: INFO: Pod "pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005017268s May 20 22:04:29.336: INFO: Pod "pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008872566s May 20 22:04:31.339: INFO: Pod "pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011962158s STEP: Saw pod success May 20 22:04:31.339: INFO: Pod "pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a" satisfied condition "Succeeded or Failed" May 20 22:04:31.342: INFO: Trying to get logs from node node2 pod pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a container agnhost-container: STEP: delete the pod May 20 22:04:31.373: INFO: Waiting for pod pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a to disappear May 20 22:04:31.375: INFO: Pod pod-configmaps-0e6555c6-40db-4d8d-92ad-76e5cb07695a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:31.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7327" for this suite. 
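------------------------------
Note: the ConfigMap-as-volume spec above hinges on two details that only show up in the pod spec: a key-to-path mapping (Items) inside the ConfigMap volume source, and a non-root pod securityContext. A sketch of such a pod is below; the agnhost image tag, mounttest args, and mount path are assumptions for illustration, with only the key-to-path-as-non-root idea taken from the test itself.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	ns := "default" // the suite uses a generated namespace such as configmap-7327

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
		Data:       map[string]string{"data-2": "value-2"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	uid := int64(1000) // run as a non-root user
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-map"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						// Map the key to a custom relative path inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:         []string{"mounttest", "--file_content=/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------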
• [SLOW TEST:6.097 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":206,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:26.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:04:26.711: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:32.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8635" for this suite. • [SLOW TEST:6.052 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":16,"skipped":250,"failed":0} SSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:32.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:04:32.781: INFO: Creating pod... May 20 22:04:32.795: INFO: Pod Quantity: 1 Status: Pending May 20 22:04:33.799: INFO: Pod Quantity: 1 Status: Pending May 20 22:04:34.799: INFO: Pod Quantity: 1 Status: Pending May 20 22:04:35.798: INFO: Pod Status: Running May 20 22:04:35.798: INFO: Creating service... 
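------------------------------
Note: every request that follows goes through the API server's proxy subresource, i.e. /api/v1/namespaces/{ns}/pods/{name}/proxy/{path} (and the services equivalent), so the client authenticates to the API server at 10.10.190.202:6443 rather than to the pod directly. A sketch of driving the same seven verbs with client-go's REST client; swapping Resource("pods")/Name("agnhost") for Resource("services")/Name("test-service") covers the service half of the spec.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// e.g. GET https://<apiserver>/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/GET
	for _, verb := range []string{"DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"} {
		body, err := cs.CoreV1().RESTClient().Verb(verb).
			Namespace("proxy-3579").
			Resource("pods").
			Name("agnhost").
			SubResource("proxy").
			Suffix("some", "path", "with", verb).
			Do(ctx).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %q\n", verb, body) // the agnhost server echoes "foo" (empty body for HEAD)
	}
}
------------------------------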
May 20 22:04:35.804: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/DELETE May 20 22:04:35.807: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 20 22:04:35.807: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/GET May 20 22:04:35.809: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 20 22:04:35.809: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/HEAD May 20 22:04:35.812: INFO: http.Client request:HEAD | StatusCode:200 May 20 22:04:35.812: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/OPTIONS May 20 22:04:35.814: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 20 22:04:35.814: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/PATCH May 20 22:04:35.816: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 20 22:04:35.816: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/POST May 20 22:04:35.818: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 20 22:04:35.818: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/pods/agnhost/proxy/some/path/with/PUT May 20 22:04:35.820: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT May 20 22:04:35.820: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/DELETE May 20 22:04:35.822: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 20 22:04:35.822: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/GET May 20 22:04:35.825: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 20 22:04:35.825: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/HEAD May 20 22:04:35.828: INFO: http.Client request:HEAD | StatusCode:200 May 20 22:04:35.828: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/OPTIONS May 20 22:04:35.831: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 20 22:04:35.831: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/PATCH May 20 22:04:35.834: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 20 22:04:35.834: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/POST May 20 22:04:35.837: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 20 22:04:35.837: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-3579/services/test-service/proxy/some/path/with/PUT May 20 22:04:35.839: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:35.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3579" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":17,"skipped":257,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:35.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:03:35.461: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:04:36.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3637" for this suite. • [SLOW TEST:61.308 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":11,"skipped":269,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:19.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-814 May 20 22:02:19.161: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 20 22:02:21.164: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 20 22:02:23.166: INFO: The status of Pod kube-proxy-mode-detector 
is Running (Ready = true) May 20 22:02:23.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 20 22:02:23.426: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 20 22:02:23.426: INFO: stdout: "iptables" May 20 22:02:23.426: INFO: proxyMode: iptables May 20 22:02:23.433: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 22:02:23.435: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-814 STEP: creating replication controller affinity-nodeport-timeout in namespace services-814 I0520 22:02:23.448443 28 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-814, replica count: 3 I0520 22:02:26.499924 28 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:02:29.501278 28 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:02:32.501992 28 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:02:32.510: INFO: Creating new exec pod May 20 22:02:39.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' May 20 22:02:39.828: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 20 22:02:39.828: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:02:39.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.1.210 80' May 20 22:02:40.157: INFO: stderr: "+ nc -v -t -w 2 10.233.1.210 80\n+ echo hostName\nConnection to 10.233.1.210 80 port [tcp/http] succeeded!\n" May 20 22:02:40.157: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:02:40.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:40.409: INFO: rc: 1 May 20 22:02:40.409: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
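------------------------------
Note: the "Connection refused ... Retrying..." records above and below are expected while kube-proxy is still programming the freshly allocated NodePort (31260 here) on node 10.10.190.207; the framework simply re-runs the probe about once a second until it connects or the overall timeout expires. The same poll-until-reachable pattern in Go is sketched below; the interval and timeout values are assumptions.

package main

import (
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	addr := "10.10.190.207:31260" // node IP and NodePort from the log

	// Poll roughly once a second until the NodePort accepts TCP connections,
	// tolerating "connection refused" while kube-proxy's iptables rules converge.
	err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false, nil // not reachable yet; retry
		}
		conn.Close()
		return true, nil
	})
	if err != nil {
		panic(fmt.Errorf("NodePort %s never became reachable: %w", addr, err))
	}
	fmt.Println("NodePort reachable:", addr)
}
------------------------------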
May 20 22:02:41.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:42.405: INFO: rc: 1 May 20 22:02:42.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:42.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:43.034: INFO: rc: 1 May 20 22:02:43.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:43.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:43.728: INFO: rc: 1 May 20 22:02:43.728: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:44.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:44.675: INFO: rc: 1 May 20 22:02:44.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:02:45.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:46.290: INFO: rc: 1 May 20 22:02:46.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:46.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:46.973: INFO: rc: 1 May 20 22:02:46.973: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:47.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:47.877: INFO: rc: 1 May 20 22:02:47.877: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:48.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:48.704: INFO: rc: 1 May 20 22:02:48.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:02:49.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:49.906: INFO: rc: 1 May 20 22:02:49.906: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:50.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:50.711: INFO: rc: 1 May 20 22:02:50.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:51.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:51.680: INFO: rc: 1 May 20 22:02:51.680: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:52.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:52.768: INFO: rc: 1 May 20 22:02:52.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:02:53.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:53.957: INFO: rc: 1 May 20 22:02:53.957: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:54.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:54.656: INFO: rc: 1 May 20 22:02:54.656: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:55.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:56.041: INFO: rc: 1 May 20 22:02:56.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:02:56.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:02:56.957: INFO: rc: 1 May 20 22:02:56.957: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
[... the identical reachability probe was retried roughly once per second from May 20 22:02:57 through May 20 22:04:29; every attempt failed the same way, with 'nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused' and exit status 1 ...]
May 20 22:04:30.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260'
May 20 22:04:30.793: INFO: rc: 1
May 20 22:04:30.793: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31260
nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 22:04:31.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:31.652: INFO: rc: 1 May 20 22:04:31.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:32.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:32.667: INFO: rc: 1 May 20 22:04:32.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:33.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:33.655: INFO: rc: 1 May 20 22:04:33.655: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:34.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:34.739: INFO: rc: 1 May 20 22:04:34.739: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:35.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:35.663: INFO: rc: 1 May 20 22:04:35.663: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:36.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:36.667: INFO: rc: 1 May 20 22:04:36.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:37.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:37.641: INFO: rc: 1 May 20 22:04:37.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:38.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:38.637: INFO: rc: 1 May 20 22:04:38.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:39.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:39.642: INFO: rc: 1 May 20 22:04:39.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:40.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:40.654: INFO: rc: 1 May 20 22:04:40.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:40.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260' May 20 22:04:40.869: INFO: rc: 1 May 20 22:04:40.869: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-814 exec execpod-affinitymn5hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31260: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31260 nc: connect to 10.10.190.207 port 31260 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
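The loop above is the e2e framework's service-reachability poll: it re-runs the same nc probe through the exec pod roughly once per second until a 2m0s deadline expires. As a rough illustration of that polling pattern only (not the framework's actual implementation), the following minimal Go sketch dials a TCP endpoint once per second with a 2-second per-attempt timeout and surfaces the same style of error; the endpoint and timeouts are copied from the log, everything else is hypothetical.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr once per second until it connects or the
// overall timeout elapses, mirroring the retry loop in the log above.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// 2-second per-attempt timeout, like `nc -w 2` in the probe command.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("Retrying... (%v)\n", err)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := waitForTCP("10.10.190.207:31260", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Note that the real test issues the probe via `kubectl exec` into the agnhost client pod (execpod-affinitymn5hr) so the connection originates inside the cluster; a plain dial from outside the cluster is only an approximation.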
May 20 22:04:40.870: FAIL: Unexpected error:
    <*errors.errorString | 0xc0015ae920>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31260 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31260 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc001649080, 0x77b33d8, 0xc000d7c580, 0xc0019b4000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0018ca900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0018ca900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0018ca900, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 20 22:04:40.871: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-814, will wait for the garbage collector to delete the pods
May 20 22:04:40.945: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.448842ms
May 20 22:04:41.046: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.092794ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-814".
STEP: Found 33 events.
May 20 22:04:46.865: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-kd757: { } Scheduled: Successfully assigned services-814/affinity-nodeport-timeout-kd757 to node1
May 20 22:04:46.865: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-nqwtg: { } Scheduled: Successfully assigned services-814/affinity-nodeport-timeout-nqwtg to node1
May 20 22:04:46.865: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-qmcnw: { } Scheduled: Successfully assigned services-814/affinity-nodeport-timeout-qmcnw to node2
May 20 22:04:46.865: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitymn5hr: { } Scheduled: Successfully assigned services-814/execpod-affinitymn5hr to node1
May 20 22:04:46.865: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-814/kube-proxy-mode-detector to node1
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 353.925876ms
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:21 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Created: Created container agnhost-container
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:21 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Started: Started container agnhost-container
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:23 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-qmcnw
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:23 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-nqwtg
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:23 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-kd757
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:23 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Killing: Stopping container agnhost-container
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-kd757: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-nqwtg: {kubelet node1} Created: Created container affinity-nodeport-timeout
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-nqwtg: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 345.070917ms
May 20 22:04:46.865: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-nqwtg: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-qmcnw: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-qmcnw: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 483.626931ms
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:25 +0000 UTC - event for affinity-nodeport-timeout-qmcnw: {kubelet node2} Created: Created container affinity-nodeport-timeout
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:26 +0000 UTC - event for affinity-nodeport-timeout-kd757: {kubelet node1} Started: Started container affinity-nodeport-timeout
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:26 +0000 UTC - event for affinity-nodeport-timeout-kd757: {kubelet node1} Created: Created container affinity-nodeport-timeout
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:26 +0000 UTC - event for affinity-nodeport-timeout-kd757: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 329.293441ms
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:26 +0000 UTC - event for affinity-nodeport-timeout-nqwtg: {kubelet node1} Started: Started container affinity-nodeport-timeout
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:26 +0000 UTC - event for affinity-nodeport-timeout-qmcnw: {kubelet node2} Started: Started container affinity-nodeport-timeout
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:34 +0000 UTC - event for execpod-affinitymn5hr: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 313.226988ms
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:34 +0000 UTC - event for execpod-affinitymn5hr: {kubelet node1} Created: Created container agnhost-container
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:34 +0000 UTC - event for execpod-affinitymn5hr: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:04:46.866: INFO: At 2022-05-20 22:02:35 +0000 UTC - event for execpod-affinitymn5hr: {kubelet node1} Started: Started container agnhost-container
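The stack trace points at execAffinityTestForSessionAffinityTimeout: the test was exercising a NodePort Service with ClientIP session affinity and a configured affinity timeout, and it failed before affinity could even be checked because the NodePort never accepted a connection. For readers unfamiliar with the object under test, here is a minimal, hypothetical Go sketch of such a Service; the selector, ports, and the 10-second timeout are illustrative assumptions, not values taken from the framework's code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	affinityTimeout := int32(10) // hypothetical sessionAffinityConfig.clientIP.timeoutSeconds

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // assumed pod label
			// Pin traffic from each client IP to one backend pod for
			// affinityTimeout seconds.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &affinityTimeout},
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
		},
	}
	fmt.Printf("session affinity: %s, timeout: %ds\n",
		svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}

The events continue below, followed by the framework's standard per-node diagnostic dumps.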
INFO: At 2022-05-20 22:04:40 +0000 UTC - event for affinity-nodeport-timeout-kd757: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout May 20 22:04:46.866: INFO: At 2022-05-20 22:04:40 +0000 UTC - event for affinity-nodeport-timeout-nqwtg: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout May 20 22:04:46.866: INFO: At 2022-05-20 22:04:40 +0000 UTC - event for affinity-nodeport-timeout-qmcnw: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout May 20 22:04:46.866: INFO: At 2022-05-20 22:04:40 +0000 UTC - event for execpod-affinitymn5hr: {kubelet node1} Killing: Stopping container agnhost-container May 20 22:04:46.867: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:04:46.867: INFO: May 20 22:04:46.871: INFO: Logging node info for node master1 May 20 22:04:46.873: INFO: Node Info: &Node{ObjectMeta:{master1 b016dcf2-74b7-4456-916a-8ca363b9ccc3 36800 0 2022-05-20 20:01:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:04:46.874: INFO: Logging kubelet events for node master1 May 20 22:04:46.876: INFO: Logging pods the kubelet thinks is on node master1 May 20 22:04:46.907: INFO: kube-multus-ds-amd64-k8cb6 
started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:04:46.907: INFO: Container kube-multus ready: true, restart count 1 May 20 22:04:46.907: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded) May 20 22:04:46.907: INFO: Container docker-registry ready: true, restart count 0 May 20 22:04:46.907: INFO: Container nginx ready: true, restart count 0 May 20 22:04:46.907: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded) May 20 22:04:46.907: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:04:46.907: INFO: Container prometheus-operator ready: true, restart count 0 May 20 22:04:46.907: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded) May 20 22:04:46.907: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:04:46.907: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded) May 20 22:04:46.907: INFO: Container kube-controller-manager ready: true, restart count 3 May 20 22:04:46.907: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:04:46.907: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:04:46.907: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:04:46.908: INFO: Init container install-cni ready: true, restart count 2 May 20 22:04:46.908: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:04:46.908: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:04:46.908: INFO: Container nfd-controller ready: true, restart count 0 May 20 22:04:46.908: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded) May 20 22:04:46.908: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:04:46.908: INFO: Container node-exporter ready: true, restart count 0 May 20 22:04:46.908: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded) May 20 22:04:46.908: INFO: Container kube-scheduler ready: true, restart count 1 May 20 22:04:46.998: INFO: Latency metrics for node master1 May 20 22:04:46.998: INFO: Logging node info for node master2 May 20 22:04:47.001: INFO: Node Info: &Node{ObjectMeta:{master2 ddc04b08-e43a-4e18-a612-aa3bf7f8411e 36809 0 2022-05-20 20:01:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:04:44 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:04:47.001: INFO: Logging kubelet events for node master2 May 20 22:04:47.006: INFO: Logging pods the kubelet thinks is on node master2 May 20 22:04:47.022: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:04:47.022: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container kube-controller-manager ready: true, restart count 2 May 20 22:04:47.022: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container kube-proxy ready: true, restart count 1 May 20 22:04:47.022: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:04:47.022: INFO: Init container install-cni ready: true, restart count 2 May 20 22:04:47.022: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:04:47.022: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container coredns ready: true, restart count 1 May 20 22:04:47.022: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container autoscaler ready: true, restart count 1 May 20 22:04:47.022: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:04:47.022: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:04:47.022: INFO: Container node-exporter ready: true, restart count 0 May 20 22:04:47.022: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container kube-scheduler ready: true, restart count 3 May 20 22:04:47.022: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.022: INFO: Container kube-multus ready: true, restart count 1 May 20 22:04:47.108: INFO: Latency metrics for node master2 May 20 22:04:47.108: INFO: Logging node info for node master3 May 20 22:04:47.110: INFO: Node Info: &Node{ObjectMeta:{master3 f42c1bd6-d828-4857-9180-56c73dcc370f 36848 0 2022-05-20 20:02:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:04:47.111: INFO: Logging kubelet events for node master3 May 20 22:04:47.113: INFO: Logging pods the kubelet thinks is on node master3 May 20 22:04:47.125: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.125: INFO: Container kube-multus ready: true, restart count 1 May 20 22:04:47.125: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.125: INFO: Container coredns ready: true, restart count 1 May 20 22:04:47.125: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:04:47.125: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:04:47.125: INFO: Container node-exporter ready: true, restart count 0 May 20 22:04:47.125: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.125: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:04:47.125: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.125: INFO: Container kube-scheduler ready: true, restart count 2 May 20 22:04:47.125: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.125: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:04:47.125: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:04:47.125: INFO: Init container install-cni ready: true, restart count 0 May 20 22:04:47.125: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:04:47.125: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.125: INFO: Container kube-controller-manager ready: true, restart count 1 May 20 22:04:47.205: INFO: Latency metrics for node master3 May 20 22:04:47.205: INFO: Logging node info for node node1 May 20 22:04:47.208: INFO: Node Info: &Node{ObjectMeta:{node1 65c381dd-b6f5-4e67-a327-7a45366d15af 36146 0 2022-05-20 20:03:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true 
feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:15:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:38 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:38 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:38 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:04:38 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:04:47.209: INFO: Logging kubelet events for node node1 May 20 22:04:47.211: INFO: Logging pods the kubelet thinks is on node node1 May 20 22:04:47.223: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded) May 20 22:04:47.223: INFO: Container nodereport ready: true, restart count 0 May 20 22:04:47.223: INFO: Container reconcile ready: true, restart count 0 May 20 22:04:47.223: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:04:47.223: INFO: Container collectd ready: true, restart count 0 May 20 22:04:47.223: INFO: Container 
collectd-exporter ready: true, restart count 0 May 20 22:04:47.223: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:04:47.223: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:04:47.223: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:04:47.223: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:04:47.223: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded) May 20 22:04:47.223: INFO: Container discover ready: false, restart count 0 May 20 22:04:47.223: INFO: Container init ready: false, restart count 0 May 20 22:04:47.223: INFO: Container install ready: false, restart count 0 May 20 22:04:47.223: INFO: test-rollover-deployment-98c5f4599-97d4m started at 2022-05-20 22:04:46 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container agnhost ready: false, restart count 0 May 20 22:04:47.223: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container kube-multus ready: true, restart count 1 May 20 22:04:47.223: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:04:47.223: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:04:47.223: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:04:47.223: INFO: Container node-exporter ready: true, restart count 0 May 20 22:04:47.223: INFO: busybox-f53e4789-dd1e-4225-b414-0de17c36b8d8 started at 2022-05-20 22:03:16 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.223: INFO: Container busybox ready: true, restart count 0 May 20 22:04:47.223: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:04:47.223: INFO: Init container install-cni ready: true, restart count 2 May 20 22:04:47.223: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:04:47.223: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded) May 20 22:04:47.224: INFO: Container config-reloader ready: true, restart count 0 May 20 22:04:47.224: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:04:47.224: INFO: Container grafana ready: true, restart count 0 May 20 22:04:47.224: INFO: Container prometheus ready: true, restart count 1 May 20 22:04:47.224: INFO: externalname-service-wmrc8 started at 2022-05-20 22:03:10 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.224: INFO: Container externalname-service ready: true, restart count 0 May 20 22:04:47.224: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.224: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:04:47.423: INFO: Latency metrics for node node1 May 20 22:04:47.423: INFO: Logging node info for node node2 May 20 
22:04:47.426: INFO: Node Info: &Node{ObjectMeta:{node2 a0e0a426-876d-4419-96e4-c6977ef3393c 36957 0 2022-05-20 20:03:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:04:45 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:04:47.427: INFO: Logging kubelet events for node node2 May 20 22:04:47.432: INFO: Logging pods the kubelet thinks is on node node2 May 20 22:04:47.445: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:04:47.445: INFO: externalname-service-hbmm6 started at 2022-05-20 22:03:10 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container externalname-service ready: true, restart count 0 May 20 22:04:47.445: INFO: pod2 started at 2022-05-20 22:04:31 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container container1 ready: true, restart count 0 May 20 22:04:47.445: INFO: test-rollover-controller-lszgx started at 2022-05-20 22:04:36 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container httpd ready: true, restart count 0 May 20 22:04:47.445: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:04:47.445: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded) May 20 22:04:47.445: INFO: Container nodereport ready: true, restart count 
0 May 20 22:04:47.445: INFO: Container reconcile ready: true, restart count 0 May 20 22:04:47.445: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:04:47.445: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:04:47.445: INFO: Container node-exporter ready: true, restart count 0 May 20 22:04:47.445: INFO: var-expansion-68bb6ed1-d879-453d-ad0d-f4dc0648d227 started at 2022-05-20 22:02:48 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container dapi-container ready: false, restart count 0 May 20 22:04:47.445: INFO: execpod5j4kg started at 2022-05-20 22:03:22 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:04:47.445: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:04:47.445: INFO: svc-latency-rc-jt5bm started at 2022-05-20 22:04:35 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container svc-latency-rc ready: true, restart count 0 May 20 22:04:47.445: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container kube-multus ready: true, restart count 1 May 20 22:04:47.445: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:04:47.445: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container tas-extender ready: true, restart count 0 May 20 22:04:47.445: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded) May 20 22:04:47.445: INFO: Container discover ready: false, restart count 0 May 20 22:04:47.445: INFO: Container init ready: false, restart count 0 May 20 22:04:47.445: INFO: Container install ready: false, restart count 0 May 20 22:04:47.445: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:04:47.445: INFO: Container collectd ready: true, restart count 0 May 20 22:04:47.445: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:04:47.445: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:04:47.445: INFO: pod-logs-websocket-8422e920-378c-4718-b108-aabc19a95bfc started at 2022-05-20 22:04:25 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container main ready: true, restart count 0 May 20 22:04:47.445: INFO: ss-0 started at 2022-05-20 22:04:30 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container webserver ready: false, restart count 0 May 20 22:04:47.445: INFO: pod1 started at 2022-05-20 22:04:31 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container container1 ready: true, restart count 0 May 20 22:04:47.445: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:04:47.445: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:04:47.445: INFO: Container kube-proxy 
ready: true, restart count 2
May 20 22:04:47.445: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 22:04:47.445: INFO: Init container install-cni ready: true, restart count 1
May 20 22:04:47.445: INFO: Container kube-flannel ready: true, restart count 2
May 20 22:04:47.717: INFO: Latency metrics for node node2
May 20 22:04:47.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-814" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [148.602 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:04:40.870: Unexpected error:
<*errors.errorString | 0xc0015ae920>: {
    s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31260 over TCP protocol",
}
service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31260 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":51,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:35.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:04:35.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4554
I0520 22:04:35.911237 38 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4554, replica count: 1
I0520 22:04:36.962689 38 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 22:04:37.963567 38 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 22:04:38.964124 38 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 20 22:04:39.074: INFO: Created: latency-svc-cwpb9
May 20 22:04:39.080: INFO: Got endpoints: latency-svc-cwpb9 [15.46591ms]
May 20 22:04:39.087: INFO: Created: latency-svc-f9q9w
May 20 22:04:39.089: INFO: Got endpoints: latency-svc-f9q9w [8.506964ms]
May 20 22:04:39.090: INFO: Created: latency-svc-m29ch
May 20 22:04:39.092: INFO: Got endpoints: latency-svc-m29ch [11.877205ms]
May 20
22:04:39.093: INFO: Created: latency-svc-fljxt May 20 22:04:39.096: INFO: Got endpoints: latency-svc-fljxt [15.033015ms] May 20 22:04:39.096: INFO: Created: latency-svc-cqdr8 May 20 22:04:39.098: INFO: Got endpoints: latency-svc-cqdr8 [17.288549ms] May 20 22:04:39.099: INFO: Created: latency-svc-ssgjb May 20 22:04:39.100: INFO: Got endpoints: latency-svc-ssgjb [19.503492ms] May 20 22:04:39.101: INFO: Created: latency-svc-bgmz6 May 20 22:04:39.104: INFO: Got endpoints: latency-svc-bgmz6 [22.885159ms] May 20 22:04:39.104: INFO: Created: latency-svc-dfnf6 May 20 22:04:39.106: INFO: Got endpoints: latency-svc-dfnf6 [25.270237ms] May 20 22:04:39.107: INFO: Created: latency-svc-dwm5x May 20 22:04:39.109: INFO: Got endpoints: latency-svc-dwm5x [28.267233ms] May 20 22:04:39.112: INFO: Created: latency-svc-v2h5m May 20 22:04:39.112: INFO: Created: latency-svc-4rrng May 20 22:04:39.115: INFO: Got endpoints: latency-svc-v2h5m [33.968627ms] May 20 22:04:39.115: INFO: Got endpoints: latency-svc-4rrng [34.504292ms] May 20 22:04:39.116: INFO: Created: latency-svc-hcwn6 May 20 22:04:39.118: INFO: Created: latency-svc-6hvr4 May 20 22:04:39.118: INFO: Got endpoints: latency-svc-hcwn6 [37.505237ms] May 20 22:04:39.121: INFO: Created: latency-svc-7j9rd May 20 22:04:39.121: INFO: Got endpoints: latency-svc-6hvr4 [39.998834ms] May 20 22:04:39.123: INFO: Got endpoints: latency-svc-7j9rd [41.944798ms] May 20 22:04:39.125: INFO: Created: latency-svc-nqnf8 May 20 22:04:39.126: INFO: Got endpoints: latency-svc-nqnf8 [45.524731ms] May 20 22:04:39.127: INFO: Created: latency-svc-xftv2 May 20 22:04:39.129: INFO: Got endpoints: latency-svc-xftv2 [48.093819ms] May 20 22:04:39.130: INFO: Created: latency-svc-cx5xl May 20 22:04:39.132: INFO: Got endpoints: latency-svc-cx5xl [43.209302ms] May 20 22:04:39.133: INFO: Created: latency-svc-8z9r6 May 20 22:04:39.136: INFO: Got endpoints: latency-svc-8z9r6 [43.189626ms] May 20 22:04:39.136: INFO: Created: latency-svc-qppvr May 20 22:04:39.138: INFO: Got endpoints: latency-svc-qppvr [42.832978ms] May 20 22:04:39.139: INFO: Created: latency-svc-7cffd May 20 22:04:39.141: INFO: Got endpoints: latency-svc-7cffd [42.73082ms] May 20 22:04:39.144: INFO: Created: latency-svc-xx8h8 May 20 22:04:39.145: INFO: Got endpoints: latency-svc-xx8h8 [45.08903ms] May 20 22:04:39.147: INFO: Created: latency-svc-wm5jz May 20 22:04:39.149: INFO: Got endpoints: latency-svc-wm5jz [45.330152ms] May 20 22:04:39.150: INFO: Created: latency-svc-zfzvd May 20 22:04:39.152: INFO: Got endpoints: latency-svc-zfzvd [45.351171ms] May 20 22:04:39.153: INFO: Created: latency-svc-x9vdp May 20 22:04:39.154: INFO: Got endpoints: latency-svc-x9vdp [44.933483ms] May 20 22:04:39.155: INFO: Created: latency-svc-5lc9s May 20 22:04:39.157: INFO: Got endpoints: latency-svc-5lc9s [42.201007ms] May 20 22:04:39.157: INFO: Created: latency-svc-qrnbp May 20 22:04:39.160: INFO: Got endpoints: latency-svc-qrnbp [44.122348ms] May 20 22:04:39.161: INFO: Created: latency-svc-zcfhc May 20 22:04:39.163: INFO: Got endpoints: latency-svc-zcfhc [44.845507ms] May 20 22:04:39.164: INFO: Created: latency-svc-wzz6t May 20 22:04:39.166: INFO: Got endpoints: latency-svc-wzz6t [45.100412ms] May 20 22:04:39.166: INFO: Created: latency-svc-gvfpj May 20 22:04:39.169: INFO: Got endpoints: latency-svc-gvfpj [45.427721ms] May 20 22:04:39.169: INFO: Created: latency-svc-dlz6b May 20 22:04:39.171: INFO: Got endpoints: latency-svc-dlz6b [44.810988ms] May 20 22:04:39.172: INFO: Created: latency-svc-stn7h May 20 22:04:39.174: INFO: Got endpoints: 
latency-svc-stn7h [44.595547ms] May 20 22:04:39.174: INFO: Created: latency-svc-htm4c May 20 22:04:39.176: INFO: Got endpoints: latency-svc-htm4c [43.954348ms] May 20 22:04:39.177: INFO: Created: latency-svc-m8bhh May 20 22:04:39.179: INFO: Created: latency-svc-rf2cf May 20 22:04:39.182: INFO: Created: latency-svc-fpm2l May 20 22:04:39.185: INFO: Created: latency-svc-rkqzq May 20 22:04:39.187: INFO: Created: latency-svc-vlv5b May 20 22:04:39.189: INFO: Created: latency-svc-79drm May 20 22:04:39.192: INFO: Created: latency-svc-d68rm May 20 22:04:39.195: INFO: Created: latency-svc-bw7qt May 20 22:04:39.198: INFO: Created: latency-svc-5xhgk May 20 22:04:39.201: INFO: Created: latency-svc-sdv9b May 20 22:04:39.204: INFO: Created: latency-svc-dp6jj May 20 22:04:39.206: INFO: Created: latency-svc-dqn49 May 20 22:04:39.209: INFO: Created: latency-svc-wtj8x May 20 22:04:39.212: INFO: Created: latency-svc-67shh May 20 22:04:39.214: INFO: Created: latency-svc-5vbsq May 20 22:04:39.227: INFO: Got endpoints: latency-svc-m8bhh [91.585913ms] May 20 22:04:39.233: INFO: Created: latency-svc-mrmjz May 20 22:04:39.277: INFO: Got endpoints: latency-svc-rf2cf [138.458421ms] May 20 22:04:39.282: INFO: Created: latency-svc-sfr4q May 20 22:04:39.328: INFO: Got endpoints: latency-svc-fpm2l [186.787891ms] May 20 22:04:39.333: INFO: Created: latency-svc-4jlvd May 20 22:04:39.378: INFO: Got endpoints: latency-svc-rkqzq [232.114ms] May 20 22:04:39.383: INFO: Created: latency-svc-6b64c May 20 22:04:39.428: INFO: Got endpoints: latency-svc-vlv5b [278.87907ms] May 20 22:04:39.436: INFO: Created: latency-svc-frzwl May 20 22:04:39.478: INFO: Got endpoints: latency-svc-79drm [326.145532ms] May 20 22:04:39.483: INFO: Created: latency-svc-c8ptb May 20 22:04:39.527: INFO: Got endpoints: latency-svc-d68rm [372.968814ms] May 20 22:04:39.533: INFO: Created: latency-svc-qxnng May 20 22:04:39.578: INFO: Got endpoints: latency-svc-bw7qt [420.710707ms] May 20 22:04:39.583: INFO: Created: latency-svc-9x9dq May 20 22:04:39.627: INFO: Got endpoints: latency-svc-5xhgk [467.087375ms] May 20 22:04:39.633: INFO: Created: latency-svc-v9kwv May 20 22:04:39.677: INFO: Got endpoints: latency-svc-sdv9b [514.094077ms] May 20 22:04:39.683: INFO: Created: latency-svc-hx867 May 20 22:04:39.727: INFO: Got endpoints: latency-svc-dp6jj [561.293201ms] May 20 22:04:39.733: INFO: Created: latency-svc-jw8z8 May 20 22:04:39.777: INFO: Got endpoints: latency-svc-dqn49 [608.529763ms] May 20 22:04:39.783: INFO: Created: latency-svc-wzxvn May 20 22:04:39.827: INFO: Got endpoints: latency-svc-wtj8x [655.857069ms] May 20 22:04:39.833: INFO: Created: latency-svc-cgch8 May 20 22:04:39.877: INFO: Got endpoints: latency-svc-67shh [703.224095ms] May 20 22:04:39.882: INFO: Created: latency-svc-ffx6z May 20 22:04:39.928: INFO: Got endpoints: latency-svc-5vbsq [751.738406ms] May 20 22:04:39.933: INFO: Created: latency-svc-lvwl5 May 20 22:04:39.977: INFO: Got endpoints: latency-svc-mrmjz [749.826326ms] May 20 22:04:39.985: INFO: Created: latency-svc-t7rh6 May 20 22:04:40.027: INFO: Got endpoints: latency-svc-sfr4q [750.264301ms] May 20 22:04:40.033: INFO: Created: latency-svc-98xm4 May 20 22:04:40.077: INFO: Got endpoints: latency-svc-4jlvd [749.557078ms] May 20 22:04:40.082: INFO: Created: latency-svc-rjl4k May 20 22:04:40.128: INFO: Got endpoints: latency-svc-6b64c [750.080044ms] May 20 22:04:40.133: INFO: Created: latency-svc-dq66d May 20 22:04:40.178: INFO: Got endpoints: latency-svc-frzwl [749.483276ms] May 20 22:04:40.184: INFO: Created: latency-svc-r6kbn May 20 
22:04:40.227: INFO: Got endpoints: latency-svc-c8ptb [749.645892ms] May 20 22:04:40.233: INFO: Created: latency-svc-4gc79 May 20 22:04:40.278: INFO: Got endpoints: latency-svc-qxnng [750.651926ms] May 20 22:04:40.284: INFO: Created: latency-svc-px6nk May 20 22:04:40.328: INFO: Got endpoints: latency-svc-9x9dq [749.937083ms] May 20 22:04:40.334: INFO: Created: latency-svc-l5h27 May 20 22:04:40.378: INFO: Got endpoints: latency-svc-v9kwv [750.94758ms] May 20 22:04:40.383: INFO: Created: latency-svc-2crlx May 20 22:04:40.428: INFO: Got endpoints: latency-svc-hx867 [750.110691ms] May 20 22:04:40.434: INFO: Created: latency-svc-4kqdq May 20 22:04:40.480: INFO: Got endpoints: latency-svc-jw8z8 [753.191606ms] May 20 22:04:40.486: INFO: Created: latency-svc-8zc4w May 20 22:04:40.527: INFO: Got endpoints: latency-svc-wzxvn [749.932594ms] May 20 22:04:40.532: INFO: Created: latency-svc-z6zzl May 20 22:04:40.578: INFO: Got endpoints: latency-svc-cgch8 [750.439094ms] May 20 22:04:40.584: INFO: Created: latency-svc-24qd7 May 20 22:04:40.654: INFO: Got endpoints: latency-svc-ffx6z [777.039412ms] May 20 22:04:40.660: INFO: Created: latency-svc-5ngm8 May 20 22:04:40.677: INFO: Got endpoints: latency-svc-lvwl5 [748.53192ms] May 20 22:04:40.682: INFO: Created: latency-svc-qw42q May 20 22:04:40.727: INFO: Got endpoints: latency-svc-t7rh6 [749.327374ms] May 20 22:04:40.733: INFO: Created: latency-svc-lxn7t May 20 22:04:40.776: INFO: Got endpoints: latency-svc-98xm4 [748.86802ms] May 20 22:04:40.782: INFO: Created: latency-svc-7xxcj May 20 22:04:40.828: INFO: Got endpoints: latency-svc-rjl4k [750.414702ms] May 20 22:04:40.833: INFO: Created: latency-svc-nvgdx May 20 22:04:40.877: INFO: Got endpoints: latency-svc-dq66d [749.553482ms] May 20 22:04:40.883: INFO: Created: latency-svc-9hf8f May 20 22:04:40.927: INFO: Got endpoints: latency-svc-r6kbn [749.276596ms] May 20 22:04:40.932: INFO: Created: latency-svc-8l7rg May 20 22:04:40.977: INFO: Got endpoints: latency-svc-4gc79 [749.344206ms] May 20 22:04:40.983: INFO: Created: latency-svc-p6q25 May 20 22:04:41.027: INFO: Got endpoints: latency-svc-px6nk [749.354419ms] May 20 22:04:41.034: INFO: Created: latency-svc-jmhpc May 20 22:04:41.078: INFO: Got endpoints: latency-svc-l5h27 [749.776924ms] May 20 22:04:41.083: INFO: Created: latency-svc-vsrvm May 20 22:04:41.127: INFO: Got endpoints: latency-svc-2crlx [749.392625ms] May 20 22:04:41.133: INFO: Created: latency-svc-m5gff May 20 22:04:41.179: INFO: Got endpoints: latency-svc-4kqdq [750.866734ms] May 20 22:04:41.185: INFO: Created: latency-svc-krhj8 May 20 22:04:41.227: INFO: Got endpoints: latency-svc-8zc4w [746.144013ms] May 20 22:04:41.232: INFO: Created: latency-svc-dc86s May 20 22:04:41.277: INFO: Got endpoints: latency-svc-z6zzl [749.971082ms] May 20 22:04:41.284: INFO: Created: latency-svc-r4sfj May 20 22:04:41.328: INFO: Got endpoints: latency-svc-24qd7 [750.103072ms] May 20 22:04:41.333: INFO: Created: latency-svc-cwbd7 May 20 22:04:41.377: INFO: Got endpoints: latency-svc-5ngm8 [723.050881ms] May 20 22:04:41.383: INFO: Created: latency-svc-j75kc May 20 22:04:41.428: INFO: Got endpoints: latency-svc-qw42q [751.116408ms] May 20 22:04:41.433: INFO: Created: latency-svc-dnjmv May 20 22:04:41.477: INFO: Got endpoints: latency-svc-lxn7t [750.590138ms] May 20 22:04:41.483: INFO: Created: latency-svc-j4slg May 20 22:04:41.528: INFO: Got endpoints: latency-svc-7xxcj [751.650779ms] May 20 22:04:41.534: INFO: Created: latency-svc-7hmcw May 20 22:04:41.577: INFO: Got endpoints: latency-svc-nvgdx [749.77051ms] May 20 
22:04:41.583: INFO: Created: latency-svc-hhwv8 May 20 22:04:41.629: INFO: Got endpoints: latency-svc-9hf8f [751.271101ms] May 20 22:04:41.634: INFO: Created: latency-svc-8hn8f May 20 22:04:41.678: INFO: Got endpoints: latency-svc-8l7rg [750.747275ms] May 20 22:04:41.683: INFO: Created: latency-svc-4rjsf May 20 22:04:41.777: INFO: Got endpoints: latency-svc-p6q25 [800.160696ms] May 20 22:04:41.782: INFO: Created: latency-svc-j58zv May 20 22:04:41.877: INFO: Got endpoints: latency-svc-jmhpc [849.360685ms] May 20 22:04:41.892: INFO: Created: latency-svc-wlkwg May 20 22:04:41.927: INFO: Got endpoints: latency-svc-vsrvm [849.595543ms] May 20 22:04:41.932: INFO: Created: latency-svc-wkwhm May 20 22:04:41.978: INFO: Got endpoints: latency-svc-m5gff [850.582252ms] May 20 22:04:41.986: INFO: Created: latency-svc-4zmf4 May 20 22:04:42.027: INFO: Got endpoints: latency-svc-krhj8 [848.284805ms] May 20 22:04:42.033: INFO: Created: latency-svc-77bh2 May 20 22:04:42.077: INFO: Got endpoints: latency-svc-dc86s [850.458478ms] May 20 22:04:42.083: INFO: Created: latency-svc-7mr5j May 20 22:04:42.128: INFO: Got endpoints: latency-svc-r4sfj [850.755118ms] May 20 22:04:42.133: INFO: Created: latency-svc-qdnf8 May 20 22:04:42.177: INFO: Got endpoints: latency-svc-cwbd7 [849.170547ms] May 20 22:04:42.183: INFO: Created: latency-svc-jz7sj May 20 22:04:42.228: INFO: Got endpoints: latency-svc-j75kc [850.869812ms] May 20 22:04:42.234: INFO: Created: latency-svc-5r7kp May 20 22:04:42.278: INFO: Got endpoints: latency-svc-dnjmv [850.383645ms] May 20 22:04:42.283: INFO: Created: latency-svc-jkfhz May 20 22:04:42.327: INFO: Got endpoints: latency-svc-j4slg [849.737445ms] May 20 22:04:42.334: INFO: Created: latency-svc-tmx2n May 20 22:04:42.378: INFO: Got endpoints: latency-svc-7hmcw [850.141781ms] May 20 22:04:42.384: INFO: Created: latency-svc-f98xx May 20 22:04:42.427: INFO: Got endpoints: latency-svc-hhwv8 [849.833709ms] May 20 22:04:42.432: INFO: Created: latency-svc-p6lg5 May 20 22:04:42.478: INFO: Got endpoints: latency-svc-8hn8f [849.209418ms] May 20 22:04:42.483: INFO: Created: latency-svc-x7mxx May 20 22:04:42.529: INFO: Got endpoints: latency-svc-4rjsf [851.60953ms] May 20 22:04:42.537: INFO: Created: latency-svc-kwgnz May 20 22:04:42.578: INFO: Got endpoints: latency-svc-j58zv [800.656013ms] May 20 22:04:42.584: INFO: Created: latency-svc-r4lj2 May 20 22:04:42.628: INFO: Got endpoints: latency-svc-wlkwg [751.466239ms] May 20 22:04:42.635: INFO: Created: latency-svc-nkwcx May 20 22:04:42.677: INFO: Got endpoints: latency-svc-wkwhm [749.990048ms] May 20 22:04:42.684: INFO: Created: latency-svc-g8bzh May 20 22:04:42.728: INFO: Got endpoints: latency-svc-4zmf4 [750.050422ms] May 20 22:04:42.734: INFO: Created: latency-svc-mlkgx May 20 22:04:42.777: INFO: Got endpoints: latency-svc-77bh2 [749.982046ms] May 20 22:04:42.783: INFO: Created: latency-svc-5zztm May 20 22:04:42.828: INFO: Got endpoints: latency-svc-7mr5j [750.550548ms] May 20 22:04:42.834: INFO: Created: latency-svc-xtmhx May 20 22:04:42.877: INFO: Got endpoints: latency-svc-qdnf8 [749.049807ms] May 20 22:04:42.884: INFO: Created: latency-svc-b97pz May 20 22:04:42.928: INFO: Got endpoints: latency-svc-jz7sj [750.463146ms] May 20 22:04:42.933: INFO: Created: latency-svc-wl6lw May 20 22:04:42.978: INFO: Got endpoints: latency-svc-5r7kp [749.919015ms] May 20 22:04:42.984: INFO: Created: latency-svc-ch96v May 20 22:04:43.027: INFO: Got endpoints: latency-svc-jkfhz [749.235335ms] May 20 22:04:43.033: INFO: Created: latency-svc-8qwrs May 20 22:04:43.077: 
INFO: Got endpoints: latency-svc-tmx2n [749.609736ms]
May 20 22:04:43.082: INFO: Created: latency-svc-zm7sk
May 20 22:04:43.127: INFO: Got endpoints: latency-svc-f98xx [749.288296ms]
May 20 22:04:43.133: INFO: Created: latency-svc-jbdw6
May 20 22:04:43.177: INFO: Got endpoints: latency-svc-p6lg5 [749.714899ms]
May 20 22:04:43.183: INFO: Created: latency-svc-kl6mg
May 20 22:04:43.228: INFO: Got endpoints: latency-svc-x7mxx [749.854088ms]
May 20 22:04:43.234: INFO: Created: latency-svc-lh7kv
May 20 22:04:43.278: INFO: Got endpoints: latency-svc-kwgnz [748.130451ms]
May 20 22:04:43.283: INFO: Created: latency-svc-dp6b7
May 20 22:04:43.328: INFO: Got endpoints: latency-svc-r4lj2 [749.742491ms]
May 20 22:04:43.333: INFO: Created: latency-svc-zl6hp
May 20 22:04:43.377: INFO: Got endpoints: latency-svc-nkwcx [748.875895ms]
May 20 22:04:43.383: INFO: Created: latency-svc-h257x
May 20 22:04:43.428: INFO: Got endpoints: latency-svc-g8bzh [750.500884ms]
May 20 22:04:43.435: INFO: Created: latency-svc-lk6dl
May 20 22:04:43.479: INFO: Got endpoints: latency-svc-mlkgx [751.611147ms]
May 20 22:04:43.485: INFO: Created: latency-svc-6w24s
May 20 22:04:43.527: INFO: Got endpoints: latency-svc-5zztm [750.402052ms]
May 20 22:04:43.533: INFO: Created: latency-svc-pdz8j
May 20 22:04:43.578: INFO: Got endpoints: latency-svc-xtmhx [750.228733ms]
May 20 22:04:43.585: INFO: Created: latency-svc-lml5n
May 20 22:04:43.628: INFO: Got endpoints: latency-svc-b97pz [750.458102ms]
May 20 22:04:43.634: INFO: Created: latency-svc-gp7ns
May 20 22:04:43.677: INFO: Got endpoints: latency-svc-wl6lw [749.22833ms]
May 20 22:04:43.683: INFO: Created: latency-svc-ddxz8
May 20 22:04:43.728: INFO: Got endpoints: latency-svc-ch96v [749.421918ms]
May 20 22:04:43.733: INFO: Created: latency-svc-wtfnp
May 20 22:04:43.777: INFO: Got endpoints: latency-svc-8qwrs [749.078144ms]
May 20 22:04:43.782: INFO: Created: latency-svc-jnb8w
May 20 22:04:43.827: INFO: Got endpoints: latency-svc-zm7sk [750.691294ms]
May 20 22:04:43.833: INFO: Created: latency-svc-rkfp5
May 20 22:04:43.878: INFO: Got endpoints: latency-svc-jbdw6 [750.346763ms]
May 20 22:04:43.883: INFO: Created: latency-svc-7xn7b
May 20 22:04:43.927: INFO: Got endpoints: latency-svc-kl6mg [750.336053ms]
May 20 22:04:43.933: INFO: Created: latency-svc-mqms5
May 20 22:04:43.977: INFO: Got endpoints: latency-svc-lh7kv [749.6073ms]
May 20 22:04:43.983: INFO: Created: latency-svc-njjsd
May 20 22:04:44.028: INFO: Got endpoints: latency-svc-dp6b7 [749.939413ms]
May 20 22:04:44.033: INFO: Created: latency-svc-p2f4d
May 20 22:04:44.077: INFO: Got endpoints: latency-svc-zl6hp [749.599576ms]
May 20 22:04:44.084: INFO: Created: latency-svc-g9lbc
May 20 22:04:44.127: INFO: Got endpoints: latency-svc-h257x [750.217776ms]
May 20 22:04:44.133: INFO: Created: latency-svc-d7h4k
May 20 22:04:44.177: INFO: Got endpoints: latency-svc-lk6dl [748.883474ms]
May 20 22:04:44.182: INFO: Created: latency-svc-7bmh6
May 20 22:04:44.228: INFO: Got endpoints: latency-svc-6w24s [747.954391ms]
May 20 22:04:44.234: INFO: Created: latency-svc-r2459
May 20 22:04:44.277: INFO: Got endpoints: latency-svc-pdz8j [749.746648ms]
May 20 22:04:44.284: INFO: Created: latency-svc-fz65k
May 20 22:04:44.328: INFO: Got endpoints: latency-svc-lml5n [750.228639ms]
May 20 22:04:44.334: INFO: Created: latency-svc-pmh2q
May 20 22:04:44.377: INFO: Got endpoints: latency-svc-gp7ns [749.24381ms]
May 20 22:04:44.382: INFO: Created: latency-svc-htcxc
May 20 22:04:44.427: INFO: Got endpoints: latency-svc-ddxz8 [750.329091ms]
May 20 22:04:44.435: INFO: Created: latency-svc-kk4nc
May 20 22:04:44.477: INFO: Got endpoints: latency-svc-wtfnp [749.350503ms]
May 20 22:04:44.483: INFO: Created: latency-svc-nqdhk
May 20 22:04:44.528: INFO: Got endpoints: latency-svc-jnb8w [751.294381ms]
May 20 22:04:44.534: INFO: Created: latency-svc-t628x
May 20 22:04:44.578: INFO: Got endpoints: latency-svc-rkfp5 [750.245022ms]
May 20 22:04:44.584: INFO: Created: latency-svc-5wsbj
May 20 22:04:44.628: INFO: Got endpoints: latency-svc-7xn7b [749.789367ms]
May 20 22:04:44.634: INFO: Created: latency-svc-ndlbn
May 20 22:04:44.678: INFO: Got endpoints: latency-svc-mqms5 [750.166291ms]
May 20 22:04:44.684: INFO: Created: latency-svc-nrszc
May 20 22:04:44.727: INFO: Got endpoints: latency-svc-njjsd [749.342485ms]
May 20 22:04:44.732: INFO: Created: latency-svc-4777g
May 20 22:04:44.777: INFO: Got endpoints: latency-svc-p2f4d [749.672025ms]
May 20 22:04:44.783: INFO: Created: latency-svc-gfflr
May 20 22:04:44.828: INFO: Got endpoints: latency-svc-g9lbc [750.450856ms]
May 20 22:04:44.833: INFO: Created: latency-svc-qctq6
May 20 22:04:44.877: INFO: Got endpoints: latency-svc-d7h4k [749.451522ms]
May 20 22:04:44.883: INFO: Created: latency-svc-fjmm7
May 20 22:04:44.927: INFO: Got endpoints: latency-svc-7bmh6 [750.601526ms]
May 20 22:04:44.933: INFO: Created: latency-svc-vhfnr
May 20 22:04:44.977: INFO: Got endpoints: latency-svc-r2459 [749.295152ms]
May 20 22:04:44.982: INFO: Created: latency-svc-x57gr
May 20 22:04:45.027: INFO: Got endpoints: latency-svc-fz65k [749.39978ms]
May 20 22:04:45.032: INFO: Created: latency-svc-xd6zp
May 20 22:04:45.077: INFO: Got endpoints: latency-svc-pmh2q [748.645903ms]
May 20 22:04:45.082: INFO: Created: latency-svc-4rf55
May 20 22:04:45.127: INFO: Got endpoints: latency-svc-htcxc [750.480257ms]
May 20 22:04:45.136: INFO: Created: latency-svc-njsj4
May 20 22:04:45.177: INFO: Got endpoints: latency-svc-kk4nc [749.570375ms]
May 20 22:04:45.182: INFO: Created: latency-svc-h7kdg
May 20 22:04:45.228: INFO: Got endpoints: latency-svc-nqdhk [750.493275ms]
May 20 22:04:45.236: INFO: Created: latency-svc-z8p5z
May 20 22:04:45.277: INFO: Got endpoints: latency-svc-t628x [748.948338ms]
May 20 22:04:45.286: INFO: Created: latency-svc-bndtz
May 20 22:04:45.328: INFO: Got endpoints: latency-svc-5wsbj [749.696265ms]
May 20 22:04:45.333: INFO: Created: latency-svc-h4hl8
May 20 22:04:45.378: INFO: Got endpoints: latency-svc-ndlbn [749.792811ms]
May 20 22:04:45.383: INFO: Created: latency-svc-p7r9h
May 20 22:04:45.429: INFO: Got endpoints: latency-svc-nrszc [751.169366ms]
May 20 22:04:45.434: INFO: Created: latency-svc-df6n9
May 20 22:04:45.477: INFO: Got endpoints: latency-svc-4777g [749.784779ms]
May 20 22:04:45.482: INFO: Created: latency-svc-ngqj2
May 20 22:04:45.527: INFO: Got endpoints: latency-svc-gfflr [749.611851ms]
May 20 22:04:45.532: INFO: Created: latency-svc-rcddr
May 20 22:04:45.578: INFO: Got endpoints: latency-svc-qctq6 [749.875177ms]
May 20 22:04:45.584: INFO: Created: latency-svc-fxb7t
May 20 22:04:45.628: INFO: Got endpoints: latency-svc-fjmm7 [750.594575ms]
May 20 22:04:45.634: INFO: Created: latency-svc-8bhtc
May 20 22:04:45.679: INFO: Got endpoints: latency-svc-vhfnr [751.070002ms]
May 20 22:04:45.684: INFO: Created: latency-svc-7szqg
May 20 22:04:45.728: INFO: Got endpoints: latency-svc-x57gr [751.19735ms]
May 20 22:04:45.735: INFO: Created: latency-svc-bqqq9
May 20 22:04:45.777: INFO: Got endpoints: latency-svc-xd6zp [749.898952ms]
May 20 22:04:45.783: INFO: Created: latency-svc-9xvcc
May 20 22:04:45.828: INFO: Got endpoints: latency-svc-4rf55 [750.807137ms]
May 20 22:04:45.834: INFO: Created: latency-svc-b8ptm
May 20 22:04:45.878: INFO: Got endpoints: latency-svc-njsj4 [750.283831ms]
May 20 22:04:45.883: INFO: Created: latency-svc-v2ld7
May 20 22:04:45.927: INFO: Got endpoints: latency-svc-h7kdg [750.328555ms]
May 20 22:04:45.933: INFO: Created: latency-svc-mflch
May 20 22:04:45.977: INFO: Got endpoints: latency-svc-z8p5z [749.430556ms]
May 20 22:04:45.984: INFO: Created: latency-svc-dgz5k
May 20 22:04:46.027: INFO: Got endpoints: latency-svc-bndtz [749.811099ms]
May 20 22:04:46.032: INFO: Created: latency-svc-zf4xn
May 20 22:04:46.077: INFO: Got endpoints: latency-svc-h4hl8 [749.568903ms]
May 20 22:04:46.083: INFO: Created: latency-svc-8m9lw
May 20 22:04:46.127: INFO: Got endpoints: latency-svc-p7r9h [749.344023ms]
May 20 22:04:46.133: INFO: Created: latency-svc-pbtj4
May 20 22:04:46.177: INFO: Got endpoints: latency-svc-df6n9 [747.962162ms]
May 20 22:04:46.182: INFO: Created: latency-svc-wjbrt
May 20 22:04:46.227: INFO: Got endpoints: latency-svc-ngqj2 [750.298526ms]
May 20 22:04:46.233: INFO: Created: latency-svc-l96pb
May 20 22:04:46.278: INFO: Got endpoints: latency-svc-rcddr [750.895057ms]
May 20 22:04:46.283: INFO: Created: latency-svc-xn7zm
May 20 22:04:46.327: INFO: Got endpoints: latency-svc-fxb7t [749.322174ms]
May 20 22:04:46.333: INFO: Created: latency-svc-n6lr6
May 20 22:04:46.378: INFO: Got endpoints: latency-svc-8bhtc [750.860912ms]
May 20 22:04:46.395: INFO: Created: latency-svc-k4xdz
May 20 22:04:46.428: INFO: Got endpoints: latency-svc-7szqg [749.799828ms]
May 20 22:04:46.435: INFO: Created: latency-svc-fdxdv
May 20 22:04:46.478: INFO: Got endpoints: latency-svc-bqqq9 [749.494773ms]
May 20 22:04:46.485: INFO: Created: latency-svc-mnhfj
May 20 22:04:46.577: INFO: Got endpoints: latency-svc-9xvcc [800.820752ms]
May 20 22:04:46.584: INFO: Created: latency-svc-5g5qs
May 20 22:04:46.628: INFO: Got endpoints: latency-svc-b8ptm [799.776338ms]
May 20 22:04:46.633: INFO: Created: latency-svc-ckg6s
May 20 22:04:46.678: INFO: Got endpoints: latency-svc-v2ld7 [800.022566ms]
May 20 22:04:46.683: INFO: Created: latency-svc-w4rr4
May 20 22:04:46.727: INFO: Got endpoints: latency-svc-mflch [799.853406ms]
May 20 22:04:46.735: INFO: Created: latency-svc-khkmq
May 20 22:04:46.777: INFO: Got endpoints: latency-svc-dgz5k [800.204522ms]
May 20 22:04:46.783: INFO: Created: latency-svc-kgpdx
May 20 22:04:46.828: INFO: Got endpoints: latency-svc-zf4xn [801.077869ms]
May 20 22:04:46.834: INFO: Created: latency-svc-vlr4d
May 20 22:04:46.877: INFO: Got endpoints: latency-svc-8m9lw [800.208989ms]
May 20 22:04:46.882: INFO: Created: latency-svc-8fm5p
May 20 22:04:46.928: INFO: Got endpoints: latency-svc-pbtj4 [800.799955ms]
May 20 22:04:46.933: INFO: Created: latency-svc-kgf46
May 20 22:04:46.977: INFO: Got endpoints: latency-svc-wjbrt [800.447901ms]
May 20 22:04:46.983: INFO: Created: latency-svc-pwt7j
May 20 22:04:47.028: INFO: Got endpoints: latency-svc-l96pb [800.450798ms]
May 20 22:04:47.032: INFO: Created: latency-svc-2qhvc
May 20 22:04:47.076: INFO: Got endpoints: latency-svc-xn7zm [798.456804ms]
May 20 22:04:47.127: INFO: Got endpoints: latency-svc-n6lr6 [799.423927ms]
May 20 22:04:47.177: INFO: Got endpoints: latency-svc-k4xdz [798.744252ms]
May 20 22:04:47.231: INFO: Got endpoints: latency-svc-fdxdv [802.702328ms]
May 20 22:04:47.277: INFO: Got endpoints: latency-svc-mnhfj [799.635218ms]
May 20 22:04:47.327: INFO: Got endpoints: latency-svc-5g5qs [750.046901ms]
May 20 22:04:47.378: INFO: Got endpoints: latency-svc-ckg6s [749.78982ms]
May 20 22:04:47.427: INFO: Got endpoints: latency-svc-w4rr4 [749.664088ms]
May 20 22:04:47.479: INFO: Got endpoints: latency-svc-khkmq [752.07799ms]
May 20 22:04:47.528: INFO: Got endpoints: latency-svc-kgpdx [750.533759ms]
May 20 22:04:47.577: INFO: Got endpoints: latency-svc-vlr4d [749.081217ms]
May 20 22:04:47.680: INFO: Got endpoints: latency-svc-8fm5p [802.89414ms]
May 20 22:04:47.727: INFO: Got endpoints: latency-svc-kgf46 [799.073105ms]
May 20 22:04:47.779: INFO: Got endpoints: latency-svc-pwt7j [801.889995ms]
May 20 22:04:47.827: INFO: Got endpoints: latency-svc-2qhvc [799.507214ms]
May 20 22:04:47.827: INFO: Latencies: [8.506964ms 11.877205ms 15.033015ms 17.288549ms 19.503492ms 22.885159ms 25.270237ms 28.267233ms 33.968627ms 34.504292ms 37.505237ms 39.998834ms 41.944798ms 42.201007ms 42.73082ms 42.832978ms 43.189626ms 43.209302ms 43.954348ms 44.122348ms 44.595547ms 44.810988ms 44.845507ms 44.933483ms 45.08903ms 45.100412ms 45.330152ms 45.351171ms 45.427721ms 45.524731ms 48.093819ms 91.585913ms 138.458421ms 186.787891ms 232.114ms 278.87907ms 326.145532ms 372.968814ms 420.710707ms 467.087375ms 514.094077ms 561.293201ms 608.529763ms 655.857069ms 703.224095ms 723.050881ms 746.144013ms 747.954391ms 747.962162ms 748.130451ms 748.53192ms 748.645903ms 748.86802ms 748.875895ms 748.883474ms 748.948338ms 749.049807ms 749.078144ms 749.081217ms 749.22833ms 749.235335ms 749.24381ms 749.276596ms 749.288296ms 749.295152ms 749.322174ms 749.327374ms 749.342485ms 749.344023ms 749.344206ms 749.350503ms 749.354419ms 749.392625ms 749.39978ms 749.421918ms 749.430556ms 749.451522ms 749.483276ms 749.494773ms 749.553482ms 749.557078ms 749.568903ms 749.570375ms 749.599576ms 749.6073ms 749.609736ms 749.611851ms 749.645892ms 749.664088ms 749.672025ms 749.696265ms 749.714899ms 749.742491ms 749.746648ms 749.77051ms 749.776924ms 749.784779ms 749.789367ms 749.78982ms 749.792811ms 749.799828ms 749.811099ms 749.826326ms 749.854088ms 749.875177ms 749.898952ms 749.919015ms 749.932594ms 749.937083ms 749.939413ms 749.971082ms 749.982046ms 749.990048ms 750.046901ms 750.050422ms 750.080044ms 750.103072ms 750.110691ms 750.166291ms 750.217776ms 750.228639ms 750.228733ms 750.245022ms 750.264301ms 750.283831ms 750.298526ms 750.328555ms 750.329091ms 750.336053ms 750.346763ms 750.402052ms 750.414702ms 750.439094ms 750.450856ms 750.458102ms 750.463146ms 750.480257ms 750.493275ms 750.500884ms 750.533759ms 750.550548ms 750.590138ms 750.594575ms 750.601526ms 750.651926ms 750.691294ms 750.747275ms 750.807137ms 750.860912ms 750.866734ms 750.895057ms 750.94758ms 751.070002ms 751.116408ms 751.169366ms 751.19735ms 751.271101ms 751.294381ms 751.466239ms 751.611147ms 751.650779ms 751.738406ms 752.07799ms 753.191606ms 777.039412ms 798.456804ms 798.744252ms 799.073105ms 799.423927ms 799.507214ms 799.635218ms 799.776338ms 799.853406ms 800.022566ms 800.160696ms 800.204522ms 800.208989ms 800.447901ms 800.450798ms 800.656013ms 800.799955ms 800.820752ms 801.077869ms 801.889995ms 802.702328ms 802.89414ms 848.284805ms 849.170547ms 849.209418ms 849.360685ms 849.595543ms 849.737445ms 849.833709ms 850.141781ms 850.383645ms 850.458478ms 850.582252ms 850.755118ms 850.869812ms 851.60953ms]
May 20 22:04:47.828: INFO: 50 %ile: 749.799828ms
May 20 22:04:47.828: INFO: 90 %ile: 800.799955ms
May 20 22:04:47.828: INFO: 99 %ile: 850.869812ms
May 20 22:04:47.828: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:04:47.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4554" for this suite.
• [SLOW TEST:11.979 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":18,"skipped":263,"failed":0}
SSSSS
------------------------------
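The 50/90/99 %ile figures above are summaries over the sorted list of 200 endpoint-creation latencies. A minimal Go sketch of that kind of tail-latency summary (nearest-rank indexing; the percentile helper below is illustrative, not the framework's exact implementation):

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile picks the nearest-rank p-th percentile from an ascending slice.
    func percentile(sorted []time.Duration, p int) time.Duration {
        idx := len(sorted) * p / 100
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }

    func main() {
        // A few samples standing in for the 200 measured latencies above (nanoseconds).
        samples := []time.Duration{8506964, 749799828, 800799955, 850869812}
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
        for _, p := range []int{50, 90, 99} {
            fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
        }
    }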
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:30.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-5252
[It] should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating statefulset ss in namespace statefulset-5252
May 20 22:04:30.918: INFO: Found 0 stateful pods, waiting for 1
May 20 22:04:40.923: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
STEP: Patch a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
May 20 22:04:40.943: INFO: Deleting all statefulset in ns statefulset-5252
May 20 22:04:40.946: INFO: Scaling statefulset ss to 0
May 20 22:04:50.958: INFO: Waiting for statefulset status.replicas updated to 0
May 20 22:04:50.961: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:04:50.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5252" for this suite.
• [SLOW TEST:20.092 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
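The "getting/updating a scale subresource" steps above correspond to client-go's GetScale/UpdateScale calls (the test also exercises a Patch of the same subresource). A minimal sketch under that assumption, reusing the statefulset-5252/ss names from the log, with error handling reduced to panics:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        ctx := context.TODO()
        sts := client.AppsV1().StatefulSets("statefulset-5252")

        // Read the scale subresource, bump replicas, and write it back.
        scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 2
        if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("scaled ss to 2 via the scale subresource")
    }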
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:47.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-48ca8ad4-48ca-4a5b-abdc-e35e1c090107
STEP: Creating secret with name s-test-opt-upd-b6a5f4bd-df3d-4cb7-ad79-06c5af76e8ba
STEP: Creating the pod
May 20 22:04:47.785: INFO: The status of Pod pod-secrets-566a5e68-d95b-4f7b-9553-fb0ac7df1e8b is Pending, waiting for it to be Running (with Ready = true)
May 20 22:04:49.790: INFO: The status of Pod pod-secrets-566a5e68-d95b-4f7b-9553-fb0ac7df1e8b is Pending, waiting for it to be Running (with Ready = true)
May 20 22:04:51.789: INFO: The status of Pod pod-secrets-566a5e68-d95b-4f7b-9553-fb0ac7df1e8b is Pending, waiting for it to be Running (with Ready = true)
May 20 22:04:53.790: INFO: The status of Pod pod-secrets-566a5e68-d95b-4f7b-9553-fb0ac7df1e8b is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-48ca8ad4-48ca-4a5b-abdc-e35e1c090107
STEP: Updating secret s-test-opt-upd-b6a5f4bd-df3d-4cb7-ad79-06c5af76e8ba
STEP: Creating secret with name s-test-opt-create-9a5cbbe7-3140-452c-9b26-c46eb12ef69b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:04:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7239" for this suite.
• [SLOW TEST:10.121 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":52,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
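The pod in this test mounts secrets marked optional, which is why s-test-opt-create-... can be created after the pod is already running and still appear in the volume. A sketch of the relevant volume wiring (names shortened and assumed, not the test's generated spec; the program only prints the spec, it does not talk to a cluster):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        optional := true
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // hypothetical name
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "secret-watcher",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "opt-secret",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "opt-secret",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: "s-test-opt-create", // may not exist yet
                            Optional:   &optional,           // pod still starts if the secret is absent
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod.Spec.Volumes, "", "  ")
        fmt.Println(string(out))
    }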
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:31.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
May 20 22:04:51.518: INFO: EndpointSlice for Service endpointslice-8909/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:05:01.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8909" for this suite.
• [SLOW TEST:30.113 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":12,"skipped":228,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
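EndpointSlices created for a Service carry the kubernetes.io/service-name label, which is how the test (and controllers generally) look them up. A minimal client-go sketch of that lookup against the endpointslice-8909/example-named-port names from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // EndpointSlices belonging to a Service are selected by the service-name label.
        slices, err := client.DiscoveryV1().EndpointSlices("endpointslice-8909").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=example-named-port"})
        if err != nil {
            panic(err)
        }
        for _, s := range slices.Items {
            fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
        }
    }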
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:36.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support rollover [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:04:36.777: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 20 22:04:41.779: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 20 22:04:41.779: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 20 22:04:43.783: INFO: Creating deployment "test-rollover-deployment"
May 20 22:04:43.788: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 20 22:04:45.794: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 20 22:04:45.798: INFO: Ensure that both replica sets have 1 created replica
May 20 22:04:45.803: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 20 22:04:45.809: INFO: Updating deployment test-rollover-deployment
May 20 22:04:45.809: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 20 22:04:47.814: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 20 22:04:47.819: INFO: Make sure deployment "test-rollover-deployment" is complete
May 20 22:04:47.824: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:47.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681085, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:04:49.832: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:49.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681085, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:04:51.830: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:51.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681090, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:04:53.831: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:53.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681090, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:04:55.830: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:55.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681090, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:04:57.831: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:57.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681090, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:04:59.833: INFO: all replica sets need to contain the pod-template-hash label
May 20 22:04:59.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681090, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681083, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 22:05:01.830: INFO:
May 20 22:05:01.830: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
May 20 22:05:01.836: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3750 edf776dd-6217-4237-854a-cd1dd7b4b7dd 38321 2 2022-05-20 22:04:43 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-20 22:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:05:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00133d1b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-20 22:04:43 +0000 UTC,LastTransitionTime:2022-05-20 22:04:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-05-20 22:05:00 +0000 UTC,LastTransitionTime:2022-05-20 22:04:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
May 20 22:05:01.839: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-3750 efba4719-62d6-411b-bec8-2d209318dea5 38310 2 2022-05-20 22:04:45 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment edf776dd-6217-4237-854a-cd1dd7b4b7dd 0xc00133d730 0xc00133d731}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:05:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edf776dd-6217-4237-854a-cd1dd7b4b7dd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00133d7a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 20 22:05:01.839: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 20 22:05:01.839: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3750 682c6475-d227-4199-ac0e-c2e50dbfe4ae 38320 2 2022-05-20 22:04:36 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment edf776dd-6217-4237-854a-cd1dd7b4b7dd 0xc00133d527 0xc00133d528}] [] [{e2e.test Update apps/v1 2022-05-20 22:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:05:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edf776dd-6217-4237-854a-cd1dd7b4b7dd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00133d5c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 20 22:05:01.840: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3750 9e4144b9-cd6c-4b0f-8bef-6dbe7aeb6a84 36942 2 2022-05-20 22:04:43 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment edf776dd-6217-4237-854a-cd1dd7b4b7dd 0xc00133d637 0xc00133d638}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edf776dd-6217-4237-854a-cd1dd7b4b7dd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00133d6c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 20 22:05:01.842: INFO: Pod "test-rollover-deployment-98c5f4599-97d4m" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-97d4m test-rollover-deployment-98c5f4599- deployment-3750 59471e45-ce9d-4f6f-8bb0-feca3b622cf3 37143 0 2022-05-20 22:04:45 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.213" ], "mac": "42:6d:88:43:f5:44", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.213" ], "mac": "42:6d:88:43:f5:44", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 efba4719-62d6-411b-bec8-2d209318dea5 0xc00133dcaf 0xc00133dcc0}] [] [{kube-controller-manager Update v1 2022-05-20 22:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"efba4719-62d6-411b-bec8-2d209318dea5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:04:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:04:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-59v9g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59v9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:04:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:04:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.213,StartTime:2022-05-20 22:04:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:04:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://5c7d0f0d30af696fd3bcec1c8c2ab0eaa52dea75390a5b37148df922d12fc00a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:05:01.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3750" for this suite.
• [SLOW TEST:25.101 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":12,"skipped":270,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
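The rollover above is driven by a single pod-template image update; the deployment controller then grows the new ReplicaSet at revision 2 and shrinks the old ones to zero, which is what the repeated status dumps are polling for. A sketch of triggering such a rollover with client-go (conflict-safe update; names taken from the log, image choice assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        deployments := client.AppsV1().Deployments("deployment-3750")

        // Changing the pod template starts a new ReplicaSet at the next revision;
        // RetryOnConflict re-reads and re-applies the change if the object moved.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := deployments.Get(context.TODO(), "test-rollover-deployment", metav1.GetOptions{})
            if err != nil {
                return err
            }
            d.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.32"
            _, err = deployments.Update(context.TODO(), d, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("rollover triggered")
    }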
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:57.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 22:04:57.972: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c09e9db3-a8ae-446c-8279-dae8f942ec40" in namespace "security-context-test-4463" to be "Succeeded or Failed"
May 20 22:04:57.975: INFO: Pod "busybox-readonly-false-c09e9db3-a8ae-446c-8279-dae8f942ec40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117771ms
May 20 22:04:59.978: INFO: Pod "busybox-readonly-false-c09e9db3-a8ae-446c-8279-dae8f942ec40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005937171s
May 20 22:05:01.984: INFO: Pod "busybox-readonly-false-c09e9db3-a8ae-446c-8279-dae8f942ec40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011232777s
May 20 22:05:01.984: INFO: Pod "busybox-readonly-false-c09e9db3-a8ae-446c-8279-dae8f942ec40" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:05:01.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4463" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":70,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSS
------------------------------
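readOnlyRootFilesystem is a per-container SecurityContext field; with it false (or unset), writes to the container's root filesystem succeed, which is all the busybox pod above needs to verify. A sketch of the relevant container spec (image and command assumed; the program only prints the spec):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        readOnly := false
        c := corev1.Container{
            Name:    "busybox-readonly-false",
            Image:   "busybox",
            Command: []string{"sh", "-c", "echo writable > /rootfs-write-test"},
            SecurityContext: &corev1.SecurityContext{
                // false: the container may write anywhere on its root filesystem.
                ReadOnlyRootFilesystem: &readOnly,
            },
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }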
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:05:01.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-8e2ab761-bfda-4d91-b32a-6923f325b4b8
STEP: Creating a pod to test consume secrets
May 20 22:05:01.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e" in namespace "projected-7012" to be "Succeeded or Failed"
May 20 22:05:01.651: INFO: Pod "pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374608ms
May 20 22:05:03.656: INFO: Pod "pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006436958s
May 20 22:05:05.659: INFO: Pod "pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009763733s
May 20 22:05:07.662: INFO: Pod "pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013233617s
STEP: Saw pod success
May 20 22:05:07.662: INFO: Pod "pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e" satisfied condition "Succeeded or Failed"
May 20 22:05:07.665: INFO: Trying to get logs from node node1 pod pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e container projected-secret-volume-test:
STEP: delete the pod
May 20 22:05:07.685: INFO: Waiting for pod pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e to disappear
May 20 22:05:07.687: INFO: Pod pod-projected-secrets-a08d9ff6-cf52-4233-bdd8-92c0a2dc116e no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:05:07.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7012" for this suite.
• [SLOW TEST:6.084 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":264,"failed":0}
SSSS
------------------------------
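"With mappings" means the projected secret's keys are remapped to new file paths via items, rather than appearing under their own key names. A sketch of that volume source (key, path, and mode values assumed; the program only prints the spec):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400)
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",          // assumed secret key
                                Path: "new-path-data-1", // the key is exposed under this mapped path
                                Mode: &mode,
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }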
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:04:47.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3965.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3965.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3965.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3965.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 20 22:04:53.909: INFO: DNS probes using dns-test-5d5490ea-965f-4ae9-81f7-b7a26fd218ad succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3965.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3965.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3965.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3965.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 20 22:04:59.949: INFO: DNS probes using dns-test-d6bb8dc1-839a-41bb-96a9-e9e892773d3b succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3965.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3965.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3965.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3965.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 20 22:05:07.991: INFO: DNS probes using dns-test-bab10eac-1e61-4d31-8e22-dffbc9328ea1 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:05:08.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3965" for this suite.
• [SLOW TEST:20.152 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":19,"skipped":268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
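Each probe pod simply resolves the ExternalName service's cluster-DNS name and expects a CNAME, first to the original target, then to bar.example.com after the update, and finally an A record once the service is switched to ClusterIP. A Go equivalent of the dig loop, which is only meaningful when run from a pod using the cluster's DNS:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        name := "dns-test-service-3.dns-3965.svc.cluster.local"
        for i := 0; i < 30; i++ { // mirrors the test's 30-iteration dig loop
            cname, err := net.LookupCNAME(name)
            if err == nil {
                fmt.Printf("%s -> %s\n", name, cname)
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("no CNAME observed")
    }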
• [SLOW TEST:10.597 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":77,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:07.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:05:07.726: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes May 20 22:05:07.744: INFO: The status of Pod pod-exec-websocket-a9b1c5c1-734c-4866-80a8-42b04e8a4a96 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:09.748: INFO: The status of Pod pod-exec-websocket-a9b1c5c1-734c-4866-80a8-42b04e8a4a96 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:11.748: INFO: The status of Pod pod-exec-websocket-a9b1c5c1-734c-4866-80a8-42b04e8a4a96 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:13.749: INFO: The status of Pod pod-exec-websocket-a9b1c5c1-734c-4866-80a8-42b04e8a4a96 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:13.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4520" for this suite. 
• [SLOW TEST:6.125 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":268,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:08.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-9ab59a53-9e21-4002-8dbc-93315d97a419 STEP: Creating a pod to test consume secrets May 20 22:05:08.150: INFO: Waiting up to 5m0s for pod "pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961" in namespace "secrets-2921" to be "Succeeded or Failed" May 20 22:05:08.152: INFO: Pod "pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167231ms May 20 22:05:10.156: INFO: Pod "pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006433484s May 20 22:05:12.160: INFO: Pod "pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010178021s May 20 22:05:14.164: INFO: Pod "pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013796225s STEP: Saw pod success May 20 22:05:14.164: INFO: Pod "pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961" satisfied condition "Succeeded or Failed" May 20 22:05:14.166: INFO: Trying to get logs from node node1 pod pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961 container secret-volume-test: STEP: delete the pod May 20 22:05:14.177: INFO: Waiting for pod pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961 to disappear May 20 22:05:14.179: INFO: Pod pod-secrets-d88995d5-a3af-4450-8844-3645a70d8961 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:14.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2921" for this suite. STEP: Destroying namespace "secret-namespace-3570" for this suite. 
• [SLOW TEST:6.106 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":300,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:51.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8636 STEP: creating service affinity-clusterip in namespace services-8636 STEP: creating replication controller affinity-clusterip in namespace services-8636 I0520 22:04:51.070059 36 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8636, replica count: 3 I0520 22:04:54.122368 36 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:04:57.123100 36 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:04:57.128: INFO: Creating new exec pod May 20 22:05:05.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8636 exec execpod-affinitylpxnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' May 20 22:05:05.405: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" May 20 22:05:05.405: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:05:05.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8636 exec execpod-affinitylpxnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.55.203 80' May 20 22:05:05.646: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.55.203 80\nConnection to 10.233.55.203 80 port [tcp/http] succeeded!\n" May 20 22:05:05.646: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:05:05.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8636 exec execpod-affinitylpxnv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.55.203:80/ ; done' May 20 22:05:05.927: INFO: stderr: "+ 
seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.55.203:80/\n" May 20 22:05:05.927: INFO: stdout: "\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9\naffinity-clusterip-bsxr9" May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Received response from host: affinity-clusterip-bsxr9 May 20 22:05:05.927: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8636, will wait for the garbage collector to delete the pods May 20 22:05:05.992: INFO: Deleting ReplicationController affinity-clusterip took: 3.660845ms May 20 22:05:06.093: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.031289ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:17.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8636" for this suite. 
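------------------------------
The sixteen identical "affinity-clusterip-bsxr9" responses above are the point of the test: with sessionAffinity: ClientIP, kube-proxy pins each client IP to a single backend. A minimal sketch of such a Service, assuming hypothetical labels and ports:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo
spec:
  selector:
    app: affinity-demo
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # the default; affinity expires after this idle period
EOF
------------------------------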
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:26.378 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":140,"failed":0} [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:17.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:17.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8355" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":12,"skipped":140,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:12.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 22:05:17.728: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:17.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7344" for this suite. 
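------------------------------
The "Expected: &{OK} to match" line above comes down to two container fields: terminationMessagePath (default /dev/termination-log) and terminationMessagePolicy. With FallbackToLogsOnError the kubelet reads the file first and only falls back to the log tail if the file is empty and the container failed; this test writes the message to the file and exits 0. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod succeeds, the message surfaces in the container status:
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------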
• [SLOW TEST:5.072 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":112,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:04:29.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 20 22:04:29.444: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 20 22:04:50.502: INFO: >>> kubeConfig: /root/.kube/config May 20 22:04:59.156: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:18.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7512" for this suite. 
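------------------------------
The CRD-publishing test checks that every served version of a CustomResourceDefinition appears in the aggregated OpenAPI document. A compact sketch of one CRD serving two versions, under a hypothetical group (demo.example.com) and a deliberately minimal schema; the grep pattern below assumes the published definition names embed the group and version, which is worth verifying against your own /openapi/v2 output:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
EOF
# Both served versions should show up in the published definitions:
kubectl get --raw /openapi/v2 | grep -o 'demo\.v[12]\.Widget' | sort -u
------------------------------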
• [SLOW TEST:49.033 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":12,"skipped":266,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:18.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:18.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2801" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":13,"skipped":273,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:14.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running May 20 22:05:16.282: INFO: running pods: 0 < 3 May 20 22:05:18.286: INFO: running pods: 0 < 3 May 20 22:05:20.286: INFO: running pods: 1 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:22.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8066" for this suite. 
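------------------------------
The "running pods: N < 3" lines above are the test waiting for the PDB's selector to match three Running pods, at which point the controller can fill in status (currentHealthy, desiredHealthy, disruptionsAllowed). A sketch of an equivalent budget, assuming a hypothetical app label:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: pdb-demo
EOF
# Once matching pods are Running, the controller reconciles the status:
kubectl get pdb pdb-demo -o jsonpath='{.status}'
------------------------------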
• [SLOW TEST:8.079 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":21,"skipped":318,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:13.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:05:14.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:05:16.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:05:18.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681114, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:05:21.311: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:22.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6687" for this suite. STEP: Destroying namespace "webhook-6687-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.527 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":15,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:18.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:05:18.557: INFO: The status of Pod busybox-readonly-fs95235c0a-e20a-4c44-a31c-4586b0e45df2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:20.560: INFO: The status of Pod busybox-readonly-fs95235c0a-e20a-4c44-a31c-4586b0e45df2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:22.560: INFO: The status of Pod busybox-readonly-fs95235c0a-e20a-4c44-a31c-4586b0e45df2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:24.561: INFO: The status of Pod busybox-readonly-fs95235c0a-e20a-4c44-a31c-4586b0e45df2 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:24.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-261" for this suite. 
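------------------------------
The read-only kubelet test above boils down to one securityContext field: readOnlyRootFilesystem mounts the container's root filesystem read-only, so writes fail with EROFS while the process itself still runs. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "touch /probe 2>/dev/null && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
sleep 15; kubectl logs readonly-demo   # expected: read-only
------------------------------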
• [SLOW TEST:6.054 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":278,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:02:48.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod May 20 22:04:48.820: INFO: Successfully updated pod "var-expansion-68bb6ed1-d879-453d-ad0d-f4dc0648d227" STEP: waiting for pod running STEP: deleting the pod gracefully May 20 22:04:50.828: INFO: Deleting pod "var-expansion-68bb6ed1-d879-453d-ad0d-f4dc0648d227" in namespace "var-expansion-6312" May 20 22:04:50.833: INFO: Wait up to 5m0s for pod "var-expansion-68bb6ed1-d879-453d-ad0d-f4dc0648d227" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:28.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6312" for this suite. 
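------------------------------
The two-minute gap above (pod created 22:02:48, successfully updated 22:04:48) is the pod stuck with a failing subPathExpr expansion until the test modifies it. subPathExpr expands $(VAR) references against the container's environment at mount time; the sketch below shows the working form, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "ls /data"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: $(POD_NAME)   # expands to "subpath-demo" when mounted
  volumes:
  - name: work
    emptyDir: {}
EOF
------------------------------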
• [SLOW TEST:160.584 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:28.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:28.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2415" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":9,"skipped":60,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:28.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:05:28.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a" in namespace "projected-1414" to be "Succeeded or Failed" May 20 22:05:28.995: INFO: Pod "downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.778191ms May 20 22:05:30.999: INFO: Pod "downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006751601s May 20 22:05:33.003: INFO: Pod "downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01003802s STEP: Saw pod success May 20 22:05:33.003: INFO: Pod "downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a" satisfied condition "Succeeded or Failed" May 20 22:05:33.005: INFO: Trying to get logs from node node1 pod downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a container client-container: STEP: delete the pod May 20 22:05:33.019: INFO: Waiting for pod downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a to disappear May 20 22:05:33.021: INFO: Pod downwardapi-volume-1a1526f1-2641-4c9d-bee4-c74c8a6a1a4a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:33.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1414" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":61,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:10.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5647 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5647 I0520 22:03:10.710609 26 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5647, replica count: 2 I0520 22:03:13.761514 26 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:03:16.761703 26 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:03:19.763735 26 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:03:22.764757 26 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:03:22.764: INFO: Creating new exec pod May 20 22:03:29.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 20 22:03:30.078: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 20 22:03:30.078: INFO: stdout: "externalname-service-hbmm6" May 20 22:03:30.078: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.59.141 80' May 20 22:03:30.822: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.59.141 80\nConnection to 10.233.59.141 80 port [tcp/http] succeeded!\n" May 20 22:03:30.822: INFO: stdout: "" May 20 22:03:31.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.59.141 80' May 20 22:03:32.077: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.59.141 80\nConnection to 10.233.59.141 80 port [tcp/http] succeeded!\n" May 20 22:03:32.077: INFO: stdout: "externalname-service-wmrc8" May 20 22:03:32.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:03:32.318: INFO: rc: 1 May 20 22:03:32.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:03:33.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:03:33.660: INFO: rc: 1 May 20 22:03:33.660: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:03:34.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:03:34.663: INFO: rc: 1 May 20 22:03:34.663: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:03:35.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:03:35.574: INFO: rc: 1 May 20 22:03:35.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:03:36.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:03:36.540: INFO: rc: 1 May 20 22:03:36.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... [the identical probe was rerun roughly once per second, failing the same way with "Connection refused" on every attempt, through May 20 22:04:10.543]
May 20 22:04:11.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:11.585: INFO: rc: 1 May 20 22:04:11.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:12.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:12.565: INFO: rc: 1 May 20 22:04:12.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30986 + echo hostName nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:13.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:13.555: INFO: rc: 1 May 20 22:04:13.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:14.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:14.572: INFO: rc: 1 May 20 22:04:14.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:15.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:15.553: INFO: rc: 1 May 20 22:04:15.553: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:16.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:16.577: INFO: rc: 1 May 20 22:04:16.578: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:17.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:17.583: INFO: rc: 1 May 20 22:04:17.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:18.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:18.548: INFO: rc: 1 May 20 22:04:18.548: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:19.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:19.561: INFO: rc: 1 May 20 22:04:19.561: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:20.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:20.625: INFO: rc: 1 May 20 22:04:20.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:21.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:21.628: INFO: rc: 1 May 20 22:04:21.628: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:22.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:22.571: INFO: rc: 1 May 20 22:04:22.571: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:23.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:23.570: INFO: rc: 1 May 20 22:04:23.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:24.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:24.555: INFO: rc: 1 May 20 22:04:24.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:25.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:25.745: INFO: rc: 1 May 20 22:04:25.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:26.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:26.639: INFO: rc: 1 May 20 22:04:26.639: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:27.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:27.642: INFO: rc: 1 May 20 22:04:27.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + + echonc -v hostName -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:28.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:28.619: INFO: rc: 1 May 20 22:04:28.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:29.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:29.596: INFO: rc: 1 May 20 22:04:29.596: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:30.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:30.733: INFO: rc: 1 May 20 22:04:30.733: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:31.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:31.624: INFO: rc: 1 May 20 22:04:31.624: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:32.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:32.693: INFO: rc: 1 May 20 22:04:32.693: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:33.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:34.081: INFO: rc: 1 May 20 22:04:34.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:34.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:34.699: INFO: rc: 1 May 20 22:04:34.699: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:35.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:35.567: INFO: rc: 1 May 20 22:04:35.567: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:36.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:36.601: INFO: rc: 1 May 20 22:04:36.601: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:37.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:37.705: INFO: rc: 1 May 20 22:04:37.705: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:38.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:38.573: INFO: rc: 1 May 20 22:04:38.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:39.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:39.539: INFO: rc: 1 May 20 22:04:39.539: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:40.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:40.557: INFO: rc: 1 May 20 22:04:40.557: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:41.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:41.592: INFO: rc: 1 May 20 22:04:41.592: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:42.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:42.563: INFO: rc: 1 May 20 22:04:42.563: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:43.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:43.617: INFO: rc: 1 May 20 22:04:43.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:44.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:44.573: INFO: rc: 1 May 20 22:04:44.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:45.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:45.557: INFO: rc: 1 May 20 22:04:45.557: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:46.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:46.607: INFO: rc: 1 May 20 22:04:46.607: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:47.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:47.560: INFO: rc: 1 May 20 22:04:47.560: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:48.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:48.556: INFO: rc: 1 May 20 22:04:48.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:49.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:49.588: INFO: rc: 1 May 20 22:04:49.588: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:50.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:50.566: INFO: rc: 1 May 20 22:04:50.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:51.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:51.713: INFO: rc: 1 May 20 22:04:51.713: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:52.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:52.784: INFO: rc: 1 May 20 22:04:52.784: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:53.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:53.659: INFO: rc: 1 May 20 22:04:53.659: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:54.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:54.650: INFO: rc: 1 May 20 22:04:54.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:55.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:55.564: INFO: rc: 1 May 20 22:04:55.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:04:56.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:56.574: INFO: rc: 1 May 20 22:04:56.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:57.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:57.583: INFO: rc: 1 May 20 22:04:57.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:58.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:58.705: INFO: rc: 1 May 20 22:04:58.705: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:04:59.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:04:59.587: INFO: rc: 1 May 20 22:04:59.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:00.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:00.714: INFO: rc: 1 May 20 22:05:00.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:01.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:01.574: INFO: rc: 1 May 20 22:05:01.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:02.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:02.610: INFO: rc: 1 May 20 22:05:02.611: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:03.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:03.604: INFO: rc: 1 May 20 22:05:03.604: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:04.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:04.610: INFO: rc: 1 May 20 22:05:04.610: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:05.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:05.544: INFO: rc: 1 May 20 22:05:05.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:06.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:06.634: INFO: rc: 1 May 20 22:05:06.634: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:07.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:07.578: INFO: rc: 1 May 20 22:05:07.578: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + + ncecho -v -t -w 2 10.10.190.207 hostName 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:08.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:08.699: INFO: rc: 1 May 20 22:05:08.699: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:09.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:09.591: INFO: rc: 1 May 20 22:05:09.591: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:10.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:10.721: INFO: rc: 1 May 20 22:05:10.721: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:11.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:11.583: INFO: rc: 1 May 20 22:05:11.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:12.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:12.543: INFO: rc: 1 May 20 22:05:12.543: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:13.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:13.565: INFO: rc: 1 May 20 22:05:13.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:14.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:14.551: INFO: rc: 1 May 20 22:05:14.551: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:15.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:15.998: INFO: rc: 1 May 20 22:05:15.998: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:16.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:16.545: INFO: rc: 1 May 20 22:05:16.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:17.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:17.958: INFO: rc: 1 May 20 22:05:17.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:18.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:18.876: INFO: rc: 1 May 20 22:05:18.876: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + nc -v -t -w 2+ 10.10.190.207 30986 echo hostName nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:19.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:19.579: INFO: rc: 1 May 20 22:05:19.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:20.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:20.887: INFO: rc: 1 May 20 22:05:20.887: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30986 + echo hostName nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:21.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:21.575: INFO: rc: 1 May 20 22:05:21.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:22.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:22.582: INFO: rc: 1 May 20 22:05:22.582: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:23.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:23.601: INFO: rc: 1 May 20 22:05:23.601: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:24.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:24.892: INFO: rc: 1 May 20 22:05:24.893: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:25.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986' May 20 22:05:26.425: INFO: rc: 1 May 20 22:05:26.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30986 nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:27.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:28.106: INFO: rc: 1
May 20 22:05:28.106: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:28.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:28.942: INFO: rc: 1
May 20 22:05:28.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:29.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:29.616: INFO: rc: 1
May 20 22:05:29.616: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:30.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:30.671: INFO: rc: 1
May 20 22:05:30.671: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:31.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:31.756: INFO: rc: 1
May 20 22:05:31.756: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:32.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:32.559: INFO: rc: 1
May 20 22:05:32.559: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:32.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986'
May 20 22:05:32.812: INFO: rc: 1
May 20 22:05:32.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 exec execpod5j4kg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30986:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30986
nc: connect to 10.10.190.207 port 30986 (tcp) failed: Connection refused
command terminated with exit code 1
error:
exit status 1
Retrying...
May 20 22:05:32.813: FAIL: Unexpected error:
    <*errors.errorString | 0xc00217e630>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30986 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30986 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001988300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001988300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001988300, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 20 22:05:32.814: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5647".
STEP: Found 17 events.
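------------------------------
The loop above is the framework's service-reachability poll: roughly once per second it execs into the helper pod execpod5j4kg and runs `echo hostName | nc -v -t -w 2 10.10.190.207 30986`, expecting kube-proxy on that node to forward the NodePort connection to a ready endpoint. Every attempt is refused, so the poll exhausts its 2m0s budget and the test fails at service.go:1351. A minimal shell sketch of the same probe, for reproducing the failure by hand (namespace, pod, and node IP/port are taken from this log; the service name externalname-service is inferred from the events below):

  # Re-run the framework's NodePort probe once per second for up to ~2 minutes.
  for i in $(seq 1 120); do
    kubectl --kubeconfig=/root/.kube/config --namespace=services-5647 \
      exec execpod5j4kg -- /bin/sh -c \
      'echo hostName | nc -v -t -w 2 10.10.190.207 30986' && break
    sleep 1
  done

  # "Connection refused" on a NodePort usually means kube-proxy never
  # programmed the port on that node, or the service has no ready endpoints;
  # these are the first things worth checking.
  kubectl --namespace=services-5647 get svc externalname-service -o wide
  kubectl --namespace=services-5647 get endpoints externalname-service

Note that the events and pod listings that follow show both externalname-service pods Running and Ready by 22:03:25, well before the poll window closed, which points at node-level NodePort plumbing rather than the backing pods.
------------------------------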
May 20 22:05:32.841: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod5j4kg: { } Scheduled: Successfully assigned services-5647/execpod5j4kg to node2
May 20 22:05:32.841: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-hbmm6: { } Scheduled: Successfully assigned services-5647/externalname-service-hbmm6 to node2
May 20 22:05:32.841: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-wmrc8: { } Scheduled: Successfully assigned services-5647/externalname-service-wmrc8 to node1
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:10 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-wmrc8
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:10 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-hbmm6
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:13 +0000 UTC - event for externalname-service-wmrc8: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:13 +0000 UTC - event for externalname-service-wmrc8: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 371.049216ms
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:14 +0000 UTC - event for externalname-service-hbmm6: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 464.134346ms
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:14 +0000 UTC - event for externalname-service-hbmm6: {kubelet node2} Created: Created container externalname-service
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:14 +0000 UTC - event for externalname-service-hbmm6: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:14 +0000 UTC - event for externalname-service-wmrc8: {kubelet node1} Started: Started container externalname-service
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:14 +0000 UTC - event for externalname-service-wmrc8: {kubelet node1} Created: Created container externalname-service
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:15 +0000 UTC - event for externalname-service-hbmm6: {kubelet node2} Started: Started container externalname-service
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:24 +0000 UTC - event for execpod5j4kg: {kubelet node2} Created: Created container agnhost-container
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:24 +0000 UTC - event for execpod5j4kg: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:24 +0000 UTC - event for execpod5j4kg: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 309.682854ms
May 20 22:05:32.841: INFO: At 2022-05-20 22:03:25 +0000 UTC - event for execpod5j4kg: {kubelet node2} Started: Started container agnhost-container
May 20 22:05:32.844: INFO: POD NODE PHASE GRACE CONDITIONS
May 20 22:05:32.844: INFO: execpod5j4kg node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:22 +0000 UTC }]
May 20 22:05:32.844: INFO: externalname-service-hbmm6 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:10 +0000
UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:10 +0000 UTC }] May 20 22:05:32.844: INFO: externalname-service-wmrc8 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:03:10 +0000 UTC }] May 20 22:05:32.844: INFO: May 20 22:05:32.848: INFO: Logging node info for node master1 May 20 22:05:32.850: INFO: Node Info: &Node{ObjectMeta:{master1 b016dcf2-74b7-4456-916a-8ca363b9ccc3 39177 0 2022-05-20 20:01:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:05:32.851: INFO: Logging kubelet events for node master1 May 20 22:05:32.854: INFO: Logging pods the kubelet thinks is on node master1 May 20 22:05:32.877: INFO: 
kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-controller-manager ready: true, restart count 3 May 20 22:05:32.877: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:05:32.877: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:05:32.877: INFO: Init container install-cni ready: true, restart count 2 May 20 22:05:32.877: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:05:32.877: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.877: INFO: Container nfd-controller ready: true, restart count 0 May 20 22:05:32.877: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:05:32.877: INFO: Container node-exporter ready: true, restart count 0 May 20 22:05:32.877: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-scheduler ready: true, restart count 1 May 20 22:05:32.877: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:05:32.877: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-multus ready: true, restart count 1 May 20 22:05:32.877: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded) May 20 22:05:32.877: INFO: Container docker-registry ready: true, restart count 0 May 20 22:05:32.877: INFO: Container nginx ready: true, restart count 0 May 20 22:05:32.877: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded) May 20 22:05:32.877: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:05:32.877: INFO: Container prometheus-operator ready: true, restart count 0 May 20 22:05:32.965: INFO: Latency metrics for node master1 May 20 22:05:32.965: INFO: Logging node info for node master2 May 20 22:05:32.968: INFO: Node Info: &Node{ObjectMeta:{master2 ddc04b08-e43a-4e18-a612-aa3bf7f8411e 39182 0 2022-05-20 20:01:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:05:24 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:05:32.968: INFO: Logging kubelet events for node master2 May 20 22:05:32.971: INFO: Logging pods the kubelet thinks is on node master2 May 20 22:05:32.980: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container kube-multus ready: true, restart count 1 May 20 22:05:32.981: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container kube-scheduler ready: true, restart count 3 May 20 22:05:32.981: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container kube-controller-manager ready: true, restart count 2 May 20 22:05:32.981: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container kube-proxy ready: true, restart count 1 May 20 22:05:32.981: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:05:32.981: INFO: Init container install-cni ready: true, restart count 2 May 20 22:05:32.981: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:05:32.981: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container coredns ready: true, restart count 1 May 20 22:05:32.981: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container autoscaler ready: true, restart count 1 May 20 22:05:32.981: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:05:32.981: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:05:32.981: INFO: Container node-exporter ready: true, restart count 0 May 20 22:05:32.981: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:05:32.981: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:05:33.071: INFO: Latency metrics for node master2 May 20 22:05:33.071: INFO: Logging node info for node master3 May 20 22:05:33.073: INFO: Node Info: &Node{ObjectMeta:{master3 f42c1bd6-d828-4857-9180-56c73dcc370f 39190 0 2022-05-20 20:02:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:25 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:25 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:25 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:05:25 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:05:33.074: INFO: Logging kubelet events for node master3 May 20 22:05:33.075: INFO: Logging pods the kubelet thinks is on node master3 May 20 22:05:33.083: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.083: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:05:33.083: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.083: INFO: Container kube-multus ready: true, restart count 1 May 20 22:05:33.083: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.083: INFO: Container coredns ready: true, restart count 1 May 20 22:05:33.083: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:05:33.083: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:05:33.083: INFO: Container node-exporter ready: true, restart count 0 May 20 22:05:33.083: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.083: INFO: Container kube-controller-manager ready: true, restart count 1 May 20 22:05:33.083: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.083: INFO: Container kube-scheduler ready: true, restart count 2 May 20 22:05:33.083: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.083: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:05:33.083: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:05:33.083: INFO: Init container install-cni ready: true, restart count 0 May 20 22:05:33.083: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:05:33.161: INFO: Latency metrics for node master3 May 20 22:05:33.161: INFO: Logging node info for node node1 May 20 22:05:33.163: INFO: Node Info: &Node{ObjectMeta:{node1 65c381dd-b6f5-4e67-a327-7a45366d15af 39362 0 2022-05-20 20:03:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true 
feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:15:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:29 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:29 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:29 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:05:29 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:05:33.164: INFO: Logging kubelet events for node node1 May 20 22:05:33.166: INFO: Logging pods the kubelet thinks is on node node1 May 20 22:05:33.183: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.183: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:05:33.183: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded) 
May 20 22:05:33.183: INFO: Container nodereport ready: true, restart count 0 May 20 22:05:33.183: INFO: Container reconcile ready: true, restart count 0 May 20 22:05:33.183: INFO: webserver-deployment-847dcfb7fb-xt6t6 started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.183: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.183: INFO: webserver-deployment-847dcfb7fb-ng6b2 started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.183: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.183: INFO: webserver-deployment-847dcfb7fb-bkvxm started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.183: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.183: INFO: pod-with-poststart-exec-hook started at 2022-05-20 22:05:25 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.183: INFO: Container pod-with-poststart-exec-hook ready: true, restart count 0 May 20 22:05:33.184: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container kube-multus ready: true, restart count 1 May 20 22:05:33.184: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:05:33.184: INFO: webserver-deployment-847dcfb7fb-4vttp started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.184: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:05:33.184: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:05:33.184: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:05:33.184: INFO: Container node-exporter ready: true, restart count 0 May 20 22:05:33.184: INFO: busybox-f53e4789-dd1e-4225-b414-0de17c36b8d8 started at 2022-05-20 22:03:16 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container busybox ready: true, restart count 0 May 20 22:05:33.184: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:05:33.184: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded) May 20 22:05:33.184: INFO: Container config-reloader ready: true, restart count 0 May 20 22:05:33.184: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:05:33.184: INFO: Container grafana ready: true, restart count 0 May 20 22:05:33.184: INFO: Container prometheus ready: true, restart count 1 May 20 22:05:33.184: INFO: pod-handle-http-request started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:05:33.184: INFO: busybox-readonly-fs95235c0a-e20a-4c44-a31c-4586b0e45df2 started at 2022-05-20 22:05:18 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container busybox-readonly-fs95235c0a-e20a-4c44-a31c-4586b0e45df2 ready: true, restart count 0 May 20 22:05:33.184: INFO: 
collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:05:33.184: INFO: Container collectd ready: true, restart count 0 May 20 22:05:33.184: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:05:33.184: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:05:33.184: INFO: affinity-nodeport-8f96d started at 2022-05-20 22:05:01 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container affinity-nodeport ready: true, restart count 0 May 20 22:05:33.184: INFO: affinity-nodeport-zl64b started at 2022-05-20 22:05:01 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container affinity-nodeport ready: true, restart count 0 May 20 22:05:33.184: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:05:33.184: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded) May 20 22:05:33.184: INFO: Container discover ready: false, restart count 0 May 20 22:05:33.184: INFO: Container init ready: false, restart count 0 May 20 22:05:33.184: INFO: Container install ready: false, restart count 0 May 20 22:05:33.184: INFO: affinity-nodeport-transition-lvnqj started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 20 22:05:33.184: INFO: webserver-deployment-847dcfb7fb-x6zj7 started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.184: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:05:33.184: INFO: Init container install-cni ready: true, restart count 2 May 20 22:05:33.184: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:05:33.184: INFO: execpod-affinitysr8d6 started at 2022-05-20 22:05:07 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:05:33.184: INFO: externalname-service-wmrc8 started at 2022-05-20 22:03:10 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.184: INFO: Container externalname-service ready: true, restart count 0 May 20 22:05:33.533: INFO: Latency metrics for node node1 May 20 22:05:33.533: INFO: Logging node info for node node2 May 20 22:05:33.537: INFO: Node Info: &Node{ObjectMeta:{node2 a0e0a426-876d-4419-96e4-c6977ef3393c 39215 0 2022-05-20 20:03:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true 
feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:26 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:26 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:05:26 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:05:26 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:05:33.538: INFO: Logging kubelet events for node node2 May 20 22:05:33.540: INFO: Logging pods the kubelet thinks is on node node2 May 20 22:05:33.556: INFO: pod-2 started at 2022-05-20 22:05:16 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.556: INFO: Container donothing ready: true, restart count 0 May 20 22:05:33.556: INFO: labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 started at 2022-05-20 22:05:24 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.556: INFO: Container client-container ready: false, restart count 0 May 20 22:05:33.556: INFO: execpod-affinityrptgl started at 2022-05-20 22:05:29 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.556: INFO: Container agnhost-container ready: false, restart count 0 May 20 22:05:33.556: INFO: pod-exec-websocket-a9b1c5c1-734c-4866-80a8-42b04e8a4a96 started at 2022-05-20 22:05:07 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.556: INFO: Container main ready: true, restart count 0 May 20 22:05:33.556: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.556: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:05:33.556: INFO: externalname-service-hbmm6 started at 2022-05-20 22:03:10 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.556: INFO: Container 
externalname-service ready: true, restart count 0 May 20 22:05:33.557: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:05:33.557: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:05:33.557: INFO: Container node-exporter ready: true, restart count 0 May 20 22:05:33.557: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:05:33.557: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded) May 20 22:05:33.557: INFO: Container nodereport ready: true, restart count 0 May 20 22:05:33.557: INFO: Container reconcile ready: true, restart count 0 May 20 22:05:33.557: INFO: webserver-deployment-847dcfb7fb-l96x8 started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.557: INFO: webserver-deployment-847dcfb7fb-vx7dn started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container httpd ready: false, restart count 0 May 20 22:05:33.557: INFO: webserver-deployment-847dcfb7fb-bhwd2 started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container httpd ready: false, restart count 0 May 20 22:05:33.557: INFO: execpod5j4kg started at 2022-05-20 22:03:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:05:33.557: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:05:33.557: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container tas-extender ready: true, restart count 0 May 20 22:05:33.557: INFO: affinity-nodeport-transition-mnvzn started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 20 22:05:33.557: INFO: affinity-nodeport-transition-cvbv6 started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 20 22:05:33.557: INFO: webserver-deployment-847dcfb7fb-gbphs started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.557: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container kube-multus ready: true, restart count 1 May 20 22:05:33.557: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:05:33.557: INFO: webserver-deployment-847dcfb7fb-f679q started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container httpd ready: true, restart count 0 May 20 22:05:33.557: INFO: affinity-nodeport-rr9kn started at 2022-05-20 22:05:01 +0000 UTC (0+1 container statuses 
recorded) May 20 22:05:33.557: INFO: Container affinity-nodeport ready: true, restart count 0 May 20 22:05:33.557: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded) May 20 22:05:33.557: INFO: Container discover ready: false, restart count 0 May 20 22:05:33.557: INFO: Container init ready: false, restart count 0 May 20 22:05:33.557: INFO: Container install ready: false, restart count 0 May 20 22:05:33.557: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:05:33.557: INFO: Container collectd ready: true, restart count 0 May 20 22:05:33.557: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:05:33.557: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:05:33.557: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:05:33.557: INFO: Init container install-cni ready: true, restart count 1 May 20 22:05:33.557: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:05:33.557: INFO: my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07-lw462 started at 2022-05-20 22:05:22 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07 ready: true, restart count 0 May 20 22:05:33.557: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:05:33.557: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:05:33.557: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:05:33.815: INFO: Latency metrics for node node2 May 20 22:05:33.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5647" for this suite. 
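
The node diagnostics above were dumped because the NodePort connectivity check gave up: the failure recorded below reports that endpoint 10.10.190.207:30986 never accepted a TCP connection within the 2m0s window. A minimal, self-contained Go sketch of such a probe, under the assumption that the check simply redials the node IP and port until the deadline (probeNodePort is a hypothetical helper, not the e2e framework's actual function):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeNodePort redials host:port over TCP until the deadline passes,
// mirroring the "service is not reachable within 2m0s timeout" error
// reported in the failure below.
func probeNodePort(host string, port int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	addr := net.JoinHostPort(host, fmt.Sprint(port))
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := probeNodePort("10.10.190.207", 30986, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against a healthy NodePort service this returns nil on the first successful dial; the error string is modeled on the one in the failure summary that follows.
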
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [143.156 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:05:32.813: Unexpected error: <*errors.errorString | 0xc00217e630>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30986 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30986 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":10,"skipped":221,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:17.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
May 20 22:05:17.797: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:19.801: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:21.801: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:23.803: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:25.803: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 20 22:05:25.818: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:27.822: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:29.823: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:31.821: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 20 22:05:31.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 22:05:31.836: INFO: Pod pod-with-poststart-exec-hook still exists May 20 22:05:33.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 22:05:33.841: INFO: Pod pod-with-poststart-exec-hook still exists May 20 22:05:35.836: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 22:05:35.839: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:35.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5563" for this suite. 
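
The passing run above first starts a handler pod, then creates a pod whose container declares a postStart exec hook, and confirms the hook fired before deleting the pod. A rough client-go sketch of such a pod spec, assuming the v1.21-era API in use here, where the hook type is corev1.Handler (renamed LifecycleHandler in later releases); the image and command are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPostStartExec sketches a pod like pod-with-poststart-exec-hook:
// the kubelet runs cmd inside the container immediately after it starts,
// and the test verifies the side effect via the separate handler pod.
func podWithPostStartExec(image string, cmd []string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-exec-hook",
				Image: image,
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in the v1.21-era API; later releases call it LifecycleHandler.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{Command: cmd},
					},
				},
			}},
		},
	}
}
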
• [SLOW TEST:18.084 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:24.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 20 22:05:24.699: INFO: The status of Pod labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:26.704: INFO: The status of Pod labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:28.703: INFO: The status of Pod labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:30.704: INFO: The status of Pod labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:32.703: INFO: The status of Pod labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:34.706: INFO: The status of Pod labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75 is Running (Ready = true) May 20 22:05:35.224: INFO: Successfully updated pod "labelsupdate6cb75caa-018c-4efc-9fe1-807d9fa3ea75" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:37.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3588" for this suite. 
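
The test above mounts the pod's labels through a downward API volume, patches the labels, and waits for the kubelet to rewrite the projected file (the "Successfully updated pod" line). A hedged sketch of the volume wiring with client-go types; the volume name and mount path are illustrative, not the test's actual values:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// downwardLabelsVolume projects metadata.labels into a file. When the pod's
// labels are patched, the kubelet eventually rewrites the projected file,
// which the test observes by reading it back from the container.
func downwardLabelsVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	return vol, corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
}
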
• [SLOW TEST:12.655 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":287,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:22.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:05:22.343: INFO: Creating deployment "webserver-deployment" May 20 22:05:22.347: INFO: Waiting for observed generation 1 May 20 22:05:24.352: INFO: Waiting for all required pods to come up May 20 22:05:24.357: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 20 22:05:36.364: INFO: Waiting for deployment "webserver-deployment" to complete May 20 22:05:36.369: INFO: Updating deployment "webserver-deployment" with a non-existent image May 20 22:05:36.375: INFO: Updating deployment webserver-deployment May 20 22:05:36.376: INFO: Waiting for observed generation 2 May 20 22:05:38.381: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 20 22:05:38.383: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 20 22:05:38.385: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 20 22:05:38.392: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 20 22:05:38.392: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 20 22:05:38.394: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 20 22:05:38.400: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 20 22:05:38.400: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 20 22:05:38.406: INFO: Updating deployment webserver-deployment May 20 22:05:38.407: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 20 22:05:38.411: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 20 22:05:38.413: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 20 22:05:38.418: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7249 2fc556f5-c002-49f8-a180-1100ea5bfae4 39631 3 2022-05-20 22:05:22 
+0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004fdd9b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-05-20 22:05:36 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-05-20 22:05:38 +0000 UTC,LastTransitionTime:2022-05-20 22:05:38 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 20 22:05:38.421: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-7249 26488bb2-a0fc-4c51-923c-e29e3e6076fe 39628 3 2022-05-20 22:05:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2fc556f5-c002-49f8-a180-1100ea5bfae4 0xc004fddda7 0xc004fddda8}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2fc556f5-c002-49f8-a180-1100ea5bfae4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004fdde28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 22:05:38.421: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 20 22:05:38.421: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-7249 494cac70-eaca-40dc-a346-47f62914a00e 39626 3 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2fc556f5-c002-49f8-a180-1100ea5bfae4 0xc004fdde87 0xc004fdde88}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:05:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2fc556f5-c002-49f8-a180-1100ea5bfae4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004fddef8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 20 22:05:38.427: INFO: Pod "webserver-deployment-795d758f88-crsf6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-crsf6 webserver-deployment-795d758f88- deployment-7249 4094842b-981e-4720-b584-735a543d913f 39611 0 2022-05-20 22:05:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.237" ], "mac": "06:1a:89:33:a2:45", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.237" ], "mac": "06:1a:89:33:a2:45", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 26488bb2-a0fc-4c51-923c-e29e3e6076fe 0xc0037bbadf 0xc0037bbaf0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26488bb2-a0fc-4c51-923c-e29e3e6076fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zmjlf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmjlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always
,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-20 22:05:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.428: INFO: Pod "webserver-deployment-795d758f88-p8ft6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-p8ft6 webserver-deployment-795d758f88- deployment-7249 47b3311f-1458-4f66-8e53-39409108babb 39585 0 2022-05-20 22:05:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 26488bb2-a0fc-4c51-923c-e29e3e6076fe 0xc0037bbcdf 0xc0037bbcf0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26488bb2-a0fc-4c51-923c-e29e3e6076fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2lj45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2lj45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node
2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-05-20 22:05:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.428: INFO: Pod "webserver-deployment-795d758f88-ph2kv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ph2kv webserver-deployment-795d758f88- deployment-7249 02ddf30e-7e2d-479d-8d8e-32bbb2589d54 39636 0 2022-05-20 22:05:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 26488bb2-a0fc-4c51-923c-e29e3e6076fe 0xc0037bbebf 0xc0037bbed0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26488bb2-a0fc-4c51-923c-e29e3e6076fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bb6r5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bb6r5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.428: INFO: Pod "webserver-deployment-795d758f88-r6489" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-r6489 webserver-deployment-795d758f88- deployment-7249 1cc83f9a-0bfe-4409-a288-7ccdf223b7c5 39555 0 2022-05-20 22:05:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 26488bb2-a0fc-4c51-923c-e29e3e6076fe 0xc0055d403f 0xc0055d4050}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26488bb2-a0fc-4c51-923c-e29e3e6076fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s57xs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s57xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.429: INFO: Pod "webserver-deployment-795d758f88-s7xsl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-s7xsl webserver-deployment-795d758f88- deployment-7249 8021ad5b-f60e-4124-98ce-80af0427e9bd 39608 0 2022-05-20 22:05:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 26488bb2-a0fc-4c51-923c-e29e3e6076fe 0xc0055d41bf 0xc0055d41d0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26488bb2-a0fc-4c51-923c-e29e3e6076fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dsp9n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsp9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node
2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-05-20 22:05:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.429: INFO: Pod "webserver-deployment-795d758f88-scdfb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-scdfb webserver-deployment-795d758f88- deployment-7249 5172381c-3003-4f28-8462-234a7855cb7f 39556 0 2022-05-20 22:05:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 26488bb2-a0fc-4c51-923c-e29e3e6076fe 0xc0055d439f 0xc0055d43b0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26488bb2-a0fc-4c51-923c-e29e3e6076fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v9vd6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v9vd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.430: INFO: Pod "webserver-deployment-847dcfb7fb-4vttp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4vttp webserver-deployment-847dcfb7fb- deployment-7249 48225538-3a6f-46ae-b7fb-892f28489531 39340 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.230" ], "mac": "d6:e0:48:aa:67:1e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.230" ], "mac": "d6:e0:48:aa:67:1e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d451f 0xc0055d4530}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.230\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lnt4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnt4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.230,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://dfa5095fd985291ca31e55c9cd29ed471c502b8a07468c69a22694fc59f1925e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.430: INFO: Pod "webserver-deployment-847dcfb7fb-bkvxm" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bkvxm webserver-deployment-847dcfb7fb- deployment-7249 5eb0e548-e52e-49b4-af0d-12e709b6793b 39346 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.231" ], "mac": "72:15:21:03:28:13", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.231" ], "mac": "72:15:21:03:28:13", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d471f 0xc0055d4730}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.231\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8wqjf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wqjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.231,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://53cfceedd0bbd11aff87f7b6749998687746dec84672fc41a4831055cc90eac3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.430: INFO: Pod "webserver-deployment-847dcfb7fb-dqtr5" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dqtr5 webserver-deployment-847dcfb7fb- deployment-7249 932d7a04-7931-4bbf-82ad-d9d623e783f6 39640 0 2022-05-20 22:05:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d491f 0xc0055d4930}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w4zk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4zk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.431: INFO: Pod "webserver-deployment-847dcfb7fb-fjzjt" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-fjzjt webserver-deployment-847dcfb7fb- deployment-7249 b3cfa3a4-b76f-4195-8aee-d5be9b298b5b 39633 0 2022-05-20 22:05:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d4a5f 0xc0055d4a70}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fxkwc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api
-access-fxkwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.434: INFO: Pod "webserver-deployment-847dcfb7fb-gbphs" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gbphs webserver-deployment-847dcfb7fb- deployment-7249 c15557bb-75b3-485e-96b1-ea0c6b437d52 39418 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.14" ], "mac": "0a:e4:fd:1b:e2:cf", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.14" ], "mac": "0a:e4:fd:1b:e2:cf", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d4bcf 0xc0055d4be0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vfj6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfj6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.14,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://f17fe254714b5591e6aa7446c8e60795652baefdcac54d62191702b0561966ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.435: INFO: Pod "webserver-deployment-847dcfb7fb-l96x8" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l96x8 webserver-deployment-847dcfb7fb- deployment-7249 cdedecff-829d-486f-ab2d-c729028296cd 39409 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.13" ], "mac": "ba:a5:92:45:79:dc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.13" ], "mac": "ba:a5:92:45:79:dc", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d4dcf 0xc0055d4de0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8n8c8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8n8c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.13,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ddb6d987401d92a1217d1f156ffa8049f3ac572ef053ce9e50dd6bb26668cceb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.436: INFO: Pod "webserver-deployment-847dcfb7fb-ng6b2" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ng6b2 webserver-deployment-847dcfb7fb- deployment-7249 8aac12fd-0278-4dfb-a095-54ad7fd0286e 39406 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.232" ], "mac": "e2:b7:5f:73:fc:51", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.232" ], "mac": 
"e2:b7:5f:73:fc:51", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d4fcf 0xc0055d4fe0}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xkjbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkjbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.232,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://543dfb61f23a240f1311ce5096fb2030dbca1654646789e48edcaa273371c8fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.437: INFO: Pod "webserver-deployment-847dcfb7fb-svhjb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-svhjb webserver-deployment-847dcfb7fb- deployment-7249 a77d0f67-dbba-491b-adfb-513b9e533a0f 39639 0 2022-05-20 22:05:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d51cf 0xc0055d51e0}] [] 
[{kube-controller-manager Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-km5gz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-km5gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerati
ons:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.440: INFO: Pod "webserver-deployment-847dcfb7fb-vx7dn" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vx7dn webserver-deployment-847dcfb7fb- deployment-7249 b8e3dcc9-f864-4192-adeb-d5466a357a1e 39502 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.15" ], "mac": "fa:a4:e4:76:a8:4c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.15" ], "mac": "fa:a4:e4:76:a8:4c", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d530f 0xc0055d5320}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hbk5l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbk5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.15,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://86968b413336c9156a4517a2e2fc6fe1f7154700e59366adc69c44a3886a4c51,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.440: INFO: Pod "webserver-deployment-847dcfb7fb-x6zj7" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-x6zj7 webserver-deployment-847dcfb7fb- deployment-7249 ac1584e2-39f4-4008-ad76-c0ee1dd49017 39343 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.229" ], "mac": "b6:4b:80:9d:fe:f4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.229" ], "mac": "b6:4b:80:9d:fe:f4", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d550f 0xc0055d5520}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.229\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ctmrn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctmrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.229,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2b672c4494e6785fd61e877be542abb9885c2e4db1168efe5ae22f1c0903fb8f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:05:38.441: INFO: Pod "webserver-deployment-847dcfb7fb-xt6t6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xt6t6 webserver-deployment-847dcfb7fb- deployment-7249 5a013d64-af27-4bb0-9601-015b17677427 39391 0 2022-05-20 22:05:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.233" ], "mac": "e2:4f:de:58:8f:bd", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.233" ], "mac": "e2:4f:de:58:8f:bd", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 494cac70-eaca-40dc-a346-47f62914a00e 0xc0055d570f 0xc0055d5720}] [] [{kube-controller-manager Update v1 2022-05-20 22:05:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"494cac70-eaca-40dc-a346-47f62914a00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:05:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:05:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c2q4k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2q4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.233,StartTime:2022-05-20 22:05:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:05:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://7953fbdb9b4531099be1288dfea154c968fa088d243503ab84cb047ebf6349b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:38.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7249" for this suite. 
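The proportional-scaling check above ends by listing every ReplicaSet pod and reporting whether it is available. A minimal client-go sketch of that final step, using the kubeconfig path, namespace, and pod-template labels shown in the dumps above (an illustration, not the e2e framework's own helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ready reports whether the pod's Ready condition is True, which is what the
// log's "is available" / "is not available" lines reflect.
func ready(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The selector matches the pod-template labels in the dumps above.
	pods, err := cs.CoreV1().Pods("deployment-7249").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd,pod-template-hash=847dcfb7fb"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("Pod %q is available: %v\n", pods.Items[i].Name, ready(&pods.Items[i]))
	}
}

Note that a pod counts as available here as soon as its Ready condition is True; the deployment controller additionally honors minReadySeconds, which this sketch omits.
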
• [SLOW TEST:16.134 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":22,"skipped":327,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:22.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07 May 20 22:05:22.537: INFO: Pod name my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07: Found 0 pods out of 1 May 20 22:05:27.541: INFO: Pod name my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07: Found 1 pods out of 1 May 20 22:05:27.541: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07" are running May 20 22:05:33.547: INFO: Pod "my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07-lw462" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 22:05:22 +0000 UTC Reason: Message:}]) May 20 22:05:33.547: INFO: Trying to dial the pod May 20 22:05:38.558: INFO: Controller my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07: Got expected result from replica 1 [my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07-lw462]: "my-hostname-basic-7fded5f7-fa8f-493e-b1a6-d809b4215f07-lw462", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:38.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2055" for this suite. 
• [SLOW TEST:16.058 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":16,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:33.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 20 22:05:33.227: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 20 22:05:35.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681133, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681133, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681133, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681133, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:05:38.248: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:05:38.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:46.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4412" for this suite. 
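For the conversion test above, the deployed "sample-crd-conversion-webhook" pod has to answer ConversionReview requests from the API server. A minimal sketch of such a handler, under the simplifying assumption that stamping the desired apiVersion is enough (the real webhook also rewrites the fields that differ between v1 and v2):

package main

import (
	"encoding/json"
	"net/http"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

func convert(w http.ResponseWriter, r *http.Request) {
	var review apiextensionsv1.ConversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	converted := make([]runtime.RawExtension, 0, len(review.Request.Objects))
	for _, obj := range review.Request.Objects {
		var u map[string]interface{}
		if err := json.Unmarshal(obj.Raw, &u); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Stamp the version the API server asked for; a real webhook would
		// also translate version-specific fields here.
		u["apiVersion"] = review.Request.DesiredAPIVersion
		raw, _ := json.Marshal(u)
		converted = append(converted, runtime.RawExtension{Raw: raw})
	}
	review.Response = &apiextensionsv1.ConversionResponse{
		UID:              review.Request.UID,
		ConvertedObjects: converted,
		Result:           metav1.Status{Status: metav1.StatusSuccess},
	}
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/crdconvert", convert)
	// The API server only calls conversion webhooks over HTTPS, which is why
	// the log sets up server certs first; these paths are placeholders.
	http.ListenAndServeTLS(":443", "/tls/tls.crt", "/tls/tls.key", nil)
}

The "Deploying the webhook service" and "Verifying the service has paired with the endpoint" steps above exist because the CRD's conversion strategy points at a Service, so the endpoint must be routable before any v1-to-v2 read can succeed.
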
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.321 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":11,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:46.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events May 20 22:05:46.464: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:46.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7137" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":12,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:37.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 22:05:47.374: INFO: DNS probes using dns-5240/dns-test-4c033e5d-7d01-4d29-b1ea-360c24c6385e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:47.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5240" for this suite. • [SLOW TEST:10.086 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":16,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:33.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:05:33.920: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 20 22:05:42.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2786 --namespace=crd-publish-openapi-2786 create -f -' May 20 22:05:43.126: INFO: stderr: "" May 20 22:05:43.126: INFO: stdout: "e2e-test-crd-publish-openapi-3437-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 20 22:05:43.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2786 --namespace=crd-publish-openapi-2786 delete 
e2e-test-crd-publish-openapi-3437-crds test-cr' May 20 22:05:43.304: INFO: stderr: "" May 20 22:05:43.304: INFO: stdout: "e2e-test-crd-publish-openapi-3437-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 20 22:05:43.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2786 --namespace=crd-publish-openapi-2786 apply -f -' May 20 22:05:43.640: INFO: stderr: "" May 20 22:05:43.640: INFO: stdout: "e2e-test-crd-publish-openapi-3437-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 20 22:05:43.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2786 --namespace=crd-publish-openapi-2786 delete e2e-test-crd-publish-openapi-3437-crds test-cr' May 20 22:05:43.807: INFO: stderr: "" May 20 22:05:43.807: INFO: stdout: "e2e-test-crd-publish-openapi-3437-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 20 22:05:43.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2786 explain e2e-test-crd-publish-openapi-3437-crds' May 20 22:05:44.176: INFO: stderr: "" May 20 22:05:44.176: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3437-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:48.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2786" for this suite. 
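The kubectl create/apply calls above succeed with arbitrary properties because the CRD publishes its spec with unknown fields preserved, so clients cannot prune what the schema does not name. The same creation step via the dynamic client, with group, kind, plural, and namespace copied from the log (a sketch, not the test's own code; the spec keys are made up):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group:    "crd-publish-openapi-test-unknown-in-nested.example.com",
		Version:  "v1",
		Resource: "e2e-test-crd-publish-openapi-3437-crds",
	}
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": gvr.Group + "/" + gvr.Version,
		"kind":       "E2e-test-crd-publish-openapi-3437-crd",
		"metadata":   map[string]interface{}{"name": "test-cr"},
		// spec preserves unknown fields, so these invented keys are accepted.
		"spec": map[string]interface{}{
			"anything": "goes",
			"waldo":    map[string]interface{}{"unknown": true},
		},
	}}
	created, err := dyn.Resource(gvr).Namespace("crd-publish-openapi-2786").
		Create(context.TODO(), cr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.GetName())
}

The "kubectl explain works to explain CR" step is served from the same published schema: the explain output quoted above is rendered from the OpenAPI document the API server aggregates for this CRD.
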
• [SLOW TEST:14.442 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":11,"skipped":251,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:38.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 20 22:05:38.630: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5266 9c665e37-3045-4db2-8945-49349558cb6f 39729 0 2022-05-20 22:05:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:05:38.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5266 9c665e37-3045-4db2-8945-49349558cb6f 39730 0 2022-05-20 22:05:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:05:38.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5266 9c665e37-3045-4db2-8945-49349558cb6f 39731 0 2022-05-20 22:05:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 20 22:05:48.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-5266 9c665e37-3045-4db2-8945-49349558cb6f 40144 0 2022-05-20 22:05:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:05:48.652: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5266 9c665e37-3045-4db2-8945-49349558cb6f 40145 0 2022-05-20 22:05:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:05:48.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5266 9c665e37-3045-4db2-8945-49349558cb6f 40146 0 2022-05-20 22:05:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-20 22:05:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:48.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5266" for this suite. • [SLOW TEST:10.064 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":17,"skipped":364,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:48.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:05:48.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093" in namespace "downward-api-3579" to be "Succeeded or Failed" May 20 22:05:48.699: INFO: Pod "downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.881207ms May 20 22:05:50.704: INFO: Pod "downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008056546s May 20 22:05:52.707: INFO: Pod "downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011403099s May 20 22:05:54.712: INFO: Pod "downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016333982s STEP: Saw pod success May 20 22:05:54.712: INFO: Pod "downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093" satisfied condition "Succeeded or Failed" May 20 22:05:54.716: INFO: Trying to get logs from node node1 pod downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093 container client-container: STEP: delete the pod May 20 22:05:54.730: INFO: Waiting for pod downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093 to disappear May 20 22:05:54.733: INFO: Pod downwardapi-volume-b8605d62-74af-4ce3-82fd-ad70b0b6c093 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:54.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3579" for this suite. • [SLOW TEST:6.079 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":364,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:54.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added May 20 22:05:54.809: INFO: Found Service test-service-5smbx in namespace services-4806 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] May 20 22:05:54.809: INFO: Service test-service-5smbx created STEP: Getting /status May 20 22:05:54.812: INFO: Service test-service-5smbx has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched May 20 22:05:54.817: INFO: observed Service test-service-5smbx in namespace services-4806 with annotations: map[] & LoadBalancer: {[]} May 20 22:05:54.817: INFO: Found Service test-service-5smbx in namespace services-4806 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} May 20 22:05:54.817: INFO: Service test-service-5smbx has service status patched STEP: updating the ServiceStatus May 20 
22:05:54.822: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated May 20 22:05:54.824: INFO: Observed Service test-service-5smbx in namespace services-4806 with annotations: map[] & Conditions: {[]} May 20 22:05:54.824: INFO: Observed event: &Service{ObjectMeta:{test-service-5smbx services-4806 7c1c167c-83e3-42b9-8943-9a55f81e681b 40280 0 2022-05-20 22:05:54 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-05-20 22:05:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.56.5,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.56.5],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} May 20 22:05:54.824: INFO: Found Service test-service-5smbx in namespace services-4806 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] May 20 22:05:54.824: INFO: Service test-service-5smbx has service status updated STEP: patching the service STEP: watching for the Service to be patched May 20 22:05:54.835: INFO: observed Service test-service-5smbx in namespace services-4806 with labels: map[test-service-static:true] May 20 22:05:54.835: INFO: observed Service test-service-5smbx in namespace services-4806 with labels: map[test-service-static:true] May 20 22:05:54.835: INFO: observed Service test-service-5smbx in namespace services-4806 with labels: map[test-service-static:true] May 20 22:05:54.835: INFO: Found Service test-service-5smbx in namespace services-4806 with labels: map[test-service:patched test-service-static:true] May 20 22:05:54.835: INFO: Service test-service-5smbx patched STEP: deleting the service STEP: watching for the Service to be deleted May 20 22:05:54.843: INFO: Observed event: ADDED May 20 22:05:54.843: INFO: Observed event: MODIFIED May 20 22:05:54.843: INFO: Observed event: MODIFIED May 20 22:05:54.843: INFO: Observed event: MODIFIED May 20 22:05:54.843: INFO: Found Service test-service-5smbx in namespace services-4806 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] May 20 22:05:54.843: INFO: Service test-service-5smbx deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:05:54.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "services-4806" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":19,"skipped":377,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:47.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod May 20 22:05:47.453: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:49.458: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:51.459: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:53.456: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:55.460: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod May 20 22:05:55.475: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:57.478: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:59.480: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 20 22:05:59.483: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:05:59.483: INFO: >>> kubeConfig: /root/.kube/config May 20 22:05:59.565: INFO: Exec stderr: "" May 20 22:05:59.565: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:05:59.565: INFO: >>> kubeConfig: /root/.kube/config May 20 22:05:59.647: INFO: Exec stderr: "" May 20 22:05:59.647: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:05:59.647: INFO: >>> kubeConfig: /root/.kube/config May 20 22:05:59.730: INFO: Exec stderr: "" May 20 22:05:59.731: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:05:59.731: INFO: >>> kubeConfig: /root/.kube/config May 20 22:05:59.859: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed 
since container specifies /etc/hosts mount May 20 22:05:59.859: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:05:59.859: INFO: >>> kubeConfig: /root/.kube/config May 20 22:05:59.949: INFO: Exec stderr: "" May 20 22:05:59.949: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:05:59.949: INFO: >>> kubeConfig: /root/.kube/config May 20 22:06:00.048: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 20 22:06:00.048: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:06:00.048: INFO: >>> kubeConfig: /root/.kube/config May 20 22:06:00.211: INFO: Exec stderr: "" May 20 22:06:00.211: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:06:00.211: INFO: >>> kubeConfig: /root/.kube/config May 20 22:06:00.295: INFO: Exec stderr: "" May 20 22:06:00.295: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:06:00.295: INFO: >>> kubeConfig: /root/.kube/config May 20 22:06:00.433: INFO: Exec stderr: "" May 20 22:06:00.433: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-284 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:06:00.433: INFO: >>> kubeConfig: /root/.kube/config May 20 22:06:00.570: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:00.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-284" for this suite. 
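------------------------------
The KubeletManagedEtcHosts checks above hinge on one kubelet rule: for every pod that does not use the host network, the kubelet generates /etc/hosts and bind-mounts it into each container, except containers that mount their own volume at /etc/hosts, which are left untouched (as are hostNetwork=true pods, which see the node's real file). A minimal sketch of a pod exercising both cases in one spec, assuming client-go v0.21.x; the pod name, image, and namespace are illustrative, not the suite's fixtures:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
            Spec: corev1.PodSpec{
                // hostNetwork defaults to false, so the kubelet manages /etc/hosts
                // for every container that does not mount over it.
                Volumes: []corev1.Volume{{
                    Name: "host-etc-hosts",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                    },
                }},
                Containers: []corev1.Container{
                    {
                        // Managed: gets the kubelet-generated hosts file.
                        Name:    "managed",
                        Image:   "busybox:1.29",
                        Command: []string{"sleep", "3600"},
                    },
                    {
                        // Unmanaged: the explicit mount at /etc/hosts tells the
                        // kubelet to leave this container's hosts file alone.
                        Name:    "unmanaged",
                        Image:   "busybox:1.29",
                        Command: []string{"sleep", "3600"},
                        VolumeMounts: []corev1.VolumeMount{{
                            Name: "host-etc-hosts", MountPath: "/etc/hosts",
                        }},
                    },
                },
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

Exec-ing cat /etc/hosts in the "managed" container should show the "# Kubernetes-managed hosts file." header that the test greps for; the "unmanaged" container should show the node's file instead.
------------------------------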
• [SLOW TEST:13.161 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:54.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 20 22:05:54.923: INFO: The status of Pod pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:56.927: INFO: The status of Pod pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67 is Pending, waiting for it to be Running (with Ready = true) May 20 22:05:58.928: INFO: The status of Pod pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 20 22:05:59.443: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67" May 20 22:05:59.443: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67" in namespace "pods-9675" to be "terminated due to deadline exceeded" May 20 22:05:59.446: INFO: Pod "pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67": Phase="Running", Reason="", readiness=true. Elapsed: 2.616038ms May 20 22:06:01.448: INFO: Pod "pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00539545s May 20 22:06:01.448: INFO: Pod "pod-update-activedeadlineseconds-9fb58fa5-031e-45e5-ba5f-ecf9e56acb67" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:01.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9675" for this suite. 
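------------------------------
The Pods test above relies on activeDeadlineSeconds being one of the few pod spec fields that is mutable after creation: it may be set on a running pod, or lowered, but never raised or cleared. Once the deadline elapses (measured from pod start), the kubelet fails the pod with reason DeadlineExceeded, which is exactly the Phase="Failed" transition polled for above. A minimal sketch of the same update against an existing pod, assuming client-go v0.21.x; the namespace and pod name are illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := client.CoreV1().Pods("default")
        // Re-read and retry on conflict: the standard pattern for spec updates.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            pod, err := pods.Get(context.TODO(), "my-running-pod", metav1.GetOptions{})
            if err != nil {
                return err
            }
            deadline := int64(5) // seconds of allowed runtime, counted from pod start
            pod.Spec.ActiveDeadlineSeconds = &deadline
            _, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("deadline set; the kubelet will mark the pod Failed/DeadlineExceeded")
    }
------------------------------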
• [SLOW TEST:6.569 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":388,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:01.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:01.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9754" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":21,"skipped":403,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:38.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:05:39.112: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:05:41.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:05:43.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:05:45.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:05:47.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:05:49.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681139, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:05:52.132: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:04.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2173" for this suite. STEP: Destroying namespace "webhook-2173-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.803 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":23,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:48.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:04.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9239" for this suite. • [SLOW TEST:16.114 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":12,"skipped":258,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:00.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command May 20 22:06:00.664: INFO: Waiting up to 5m0s for pod "var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b" in namespace "var-expansion-4824" to be "Succeeded or Failed" May 20 22:06:00.667: INFO: Pod "var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670114ms May 20 22:06:02.671: INFO: Pod "var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007236558s May 20 22:06:04.675: INFO: Pod "var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011567666s STEP: Saw pod success May 20 22:06:04.675: INFO: Pod "var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b" satisfied condition "Succeeded or Failed" May 20 22:06:04.678: INFO: Trying to get logs from node node2 pod var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b container dapi-container: STEP: delete the pod May 20 22:06:04.693: INFO: Waiting for pod var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b to disappear May 20 22:06:04.695: INFO: Pod var-expansion-5179d26b-dc8a-444d-9ec3-6d82e09dc57b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:04.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4824" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:46.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 20 22:05:46.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9376 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 20 22:05:46.819: INFO: stderr: "" May 20 22:05:46.819: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 20 22:05:51.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9376 get pod e2e-test-httpd-pod -o json' May 20 22:05:52.053: INFO: stderr: "" May 20 22:05:52.053: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.25\\\"\\n ],\\n \\\"mac\\\": \\\"62:8a:50:a8:cd:38\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.25\\\"\\n ],\\n \\\"mac\\\": \\\"62:8a:50:a8:cd:38\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-05-20T22:05:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": 
\"kubectl-9376\",\n \"resourceVersion\": \"40177\",\n \"uid\": \"d9a931b2-275f-4d47-81bd-2a37200cef15\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-bmq6x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-bmq6x\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-20T22:05:46Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-20T22:05:51Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-20T22:05:51Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-20T22:05:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://5e927661b4575f6668576d56bcd4b1261ab6117e57cfa1f01af2ed2a4580067e\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-05-20T22:05:50Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.3.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.3.25\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-05-20T22:05:46Z\"\n }\n}\n" STEP: replace the image in the pod May 20 22:05:52.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9376 replace -f -' May 20 22:05:52.462: INFO: stderr: "" May 20 22:05:52.462: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image 
k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 May 20 22:05:52.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9376 delete pods e2e-test-httpd-pod' May 20 22:06:06.845: INFO: stderr: "" May 20 22:06:06.845: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:06.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9376" for this suite. • [SLOW TEST:20.224 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":13,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:01.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 20 22:06:01.624: INFO: Waiting up to 5m0s for pod "pod-e71c33ae-ebe3-4e87-84da-78d364571e54" in namespace "emptydir-6932" to be "Succeeded or Failed" May 20 22:06:01.626: INFO: Pod "pod-e71c33ae-ebe3-4e87-84da-78d364571e54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014474ms May 20 22:06:03.630: INFO: Pod "pod-e71c33ae-ebe3-4e87-84da-78d364571e54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005545947s May 20 22:06:05.634: INFO: Pod "pod-e71c33ae-ebe3-4e87-84da-78d364571e54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009778756s May 20 22:06:07.637: INFO: Pod "pod-e71c33ae-ebe3-4e87-84da-78d364571e54": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013119159s STEP: Saw pod success May 20 22:06:07.637: INFO: Pod "pod-e71c33ae-ebe3-4e87-84da-78d364571e54" satisfied condition "Succeeded or Failed" May 20 22:06:07.640: INFO: Trying to get logs from node node2 pod pod-e71c33ae-ebe3-4e87-84da-78d364571e54 container test-container: STEP: delete the pod May 20 22:06:07.661: INFO: Waiting for pod pod-e71c33ae-ebe3-4e87-84da-78d364571e54 to disappear May 20 22:06:07.663: INFO: Pod pod-e71c33ae-ebe3-4e87-84da-78d364571e54 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:07.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6932" for this suite. • [SLOW TEST:6.081 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:04.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command May 20 22:06:04.508: INFO: Waiting up to 5m0s for pod "client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b" in namespace "containers-7856" to be "Succeeded or Failed" May 20 22:06:04.510: INFO: Pod "client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01306ms May 20 22:06:06.514: INFO: Pod "client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006364011s May 20 22:06:08.519: INFO: Pod "client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010466395s May 20 22:06:10.523: INFO: Pod "client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014419067s STEP: Saw pod success May 20 22:06:10.523: INFO: Pod "client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b" satisfied condition "Succeeded or Failed" May 20 22:06:10.525: INFO: Trying to get logs from node node2 pod client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b container agnhost-container: STEP: delete the pod May 20 22:06:10.640: INFO: Waiting for pod client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b to disappear May 20 22:06:10.642: INFO: Pod client-containers-02a7e083-b8a4-4758-8216-1b972dfe9a8b no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:10.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7856" for this suite. • [SLOW TEST:6.175 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":259,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:06.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-d4d611e8-6669-43b1-9bba-a93c44f60bf3 STEP: Creating a pod to test consume configMaps May 20 22:06:07.003: INFO: Waiting up to 5m0s for pod "pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243" in namespace "configmap-7780" to be "Succeeded or Failed" May 20 22:06:07.005: INFO: Pod "pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466991ms May 20 22:06:09.011: INFO: Pod "pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007857s May 20 22:06:11.014: INFO: Pod "pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011307982s STEP: Saw pod success May 20 22:06:11.014: INFO: Pod "pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243" satisfied condition "Succeeded or Failed" May 20 22:06:11.016: INFO: Trying to get logs from node node2 pod pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243 container agnhost-container: STEP: delete the pod May 20 22:06:11.031: INFO: Waiting for pod pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243 to disappear May 20 22:06:11.032: INFO: Pod pod-configmaps-14605846-2191-43e1-b44d-47e58ff7e243 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:11.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7780" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:07.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:06:07.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d" in namespace "downward-api-3945" to be "Succeeded or Failed" May 20 22:06:07.747: INFO: Pod "downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090008ms May 20 22:06:09.750: INFO: Pod "downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005141069s May 20 22:06:11.754: INFO: Pod "downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00914592s STEP: Saw pod success May 20 22:06:11.754: INFO: Pod "downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d" satisfied condition "Succeeded or Failed" May 20 22:06:11.757: INFO: Trying to get logs from node node1 pod downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d container client-container: STEP: delete the pod May 20 22:06:11.769: INFO: Waiting for pod downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d to disappear May 20 22:06:11.771: INFO: Pod downwardapi-volume-5c98060c-93ab-4ebd-a88a-e2f04378337d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:11.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3945" for this suite. 
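------------------------------
Both Downward API volume tests in this stretch (the memory-request file earlier and DefaultMode here) use the same mechanism: a downward API volume projects pod and container fields into files, and DefaultMode sets the permission bits on every projected file that does not override them. Resource fields are rendered after dividing by the item's divisor (default "1"), so a 32Mi request appears as 33554432. A minimal sketch combining the two, assuming client-go v0.21.x; the names, image, and mount path are illustrative:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        mode := int32(0400) // applied to each projected file unless the item overrides it
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "client-container",
                    Image: "busybox:1.29",
                    // stat -L follows the volume's atomic-update symlink to the real file.
                    Command: []string{"sh", "-c", "stat -L -c %A /etc/podinfo/mem_request && cat /etc/podinfo/mem_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            DefaultMode: &mode,
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // Projects this container's own memory request, in bytes.
                                Path: "mem_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

The pod's log should print -r-------- and 33554432, mirroring the "Succeeded or Failed" then read-the-logs pattern the conformance tests use above.
------------------------------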
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":433,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:11.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-368ebf5b-ce79-4f60-984c-ee0453089453 STEP: Creating the pod May 20 22:06:11.831: INFO: The status of Pod pod-projected-configmaps-d174a7b0-25f0-442a-a13f-abdef3de6f47 is Pending, waiting for it to be Running (with Ready = true) May 20 22:06:13.836: INFO: The status of Pod pod-projected-configmaps-d174a7b0-25f0-442a-a13f-abdef3de6f47 is Pending, waiting for it to be Running (with Ready = true) May 20 22:06:15.835: INFO: The status of Pod pod-projected-configmaps-d174a7b0-25f0-442a-a13f-abdef3de6f47 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-368ebf5b-ce79-4f60-984c-ee0453089453 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:17.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1090" for this suite. • [SLOW TEST:6.088 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":434,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":213,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:11.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:18.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8038" for this suite. • [SLOW TEST:7.039 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":15,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:04.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:21.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3809" for this suite. • [SLOW TEST:17.072 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":24,"skipped":367,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:21.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:21.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-385" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":25,"skipped":369,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:17.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-b404c8da-3323-4e37-acfa-47473efc8f49 STEP: Creating a pod to test consume secrets May 20 22:06:17.949: INFO: Waiting up to 5m0s for pod "pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c" in namespace "secrets-9980" to be "Succeeded or Failed" May 20 22:06:17.953: INFO: Pod "pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178482ms May 20 22:06:19.956: INFO: Pod "pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00720438s May 20 22:06:21.961: INFO: Pod "pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011435516s STEP: Saw pod success May 20 22:06:21.961: INFO: Pod "pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c" satisfied condition "Succeeded or Failed" May 20 22:06:21.963: INFO: Trying to get logs from node node1 pod pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c container secret-volume-test: STEP: delete the pod May 20 22:06:21.983: INFO: Waiting for pod pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c to disappear May 20 22:06:21.984: INFO: Pod pod-secrets-deafb277-d320-4d8d-975f-bd28353c027c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:21.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9980" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:18.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-c5f85a47-2670-409c-909a-44e425cf3ca1 STEP: Creating a pod to test consume configMaps May 20 22:06:18.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78" in namespace "configmap-7219" to be "Succeeded or Failed" May 20 22:06:18.232: INFO: Pod "pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78": Phase="Pending", Reason="", readiness=false. Elapsed: 1.869997ms May 20 22:06:20.236: INFO: Pod "pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005676811s May 20 22:06:22.241: INFO: Pod "pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011621854s STEP: Saw pod success May 20 22:06:22.242: INFO: Pod "pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78" satisfied condition "Succeeded or Failed" May 20 22:06:22.246: INFO: Trying to get logs from node node2 pod pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78 container agnhost-container: STEP: delete the pod May 20 22:06:22.304: INFO: Waiting for pod pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78 to disappear May 20 22:06:22.306: INFO: Pod pod-configmaps-c4c3c156-4246-4a91-a79e-03750aeb1b78 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:22.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7219" for this suite. 
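------------------------------
The two volume specs above mount a Secret and a ConfigMap into a pod and assert both the file contents and the permission bits. A minimal sketch of the same pattern, with hypothetical names (demo-secret, demo-pod): defaultMode controls the mode of the projected files, runAsUser makes the container non-root, and fsGroup sets the group ownership of the volume so the non-root user can still read the 0440 files.

kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  securityContext:
    runAsUser: 1000   # non-root, as in the [LinuxOnly] specs above
    fsGroup: 2000     # volume files are group-owned by gid 2000
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls -l /etc/demo-volume && cat /etc/demo-volume/data-1"]
    volumeMounts:
    - name: demo-volume
      mountPath: /etc/demo-volume
  volumes:
  - name: demo-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440   # r--r-----, readable through the fsGroup
EOF

kubectl logs demo-pod   # expect the file listing and "value-1"
------------------------------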
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:21.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created May 20 22:06:21.561: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 20 22:06:23.565: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 20 22:06:25.566: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:26.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3786" for this suite. 
• [SLOW TEST:5.072 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":26,"skipped":389,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:22.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:06:22.101: INFO: Creating ReplicaSet my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5 May 20 22:06:22.107: INFO: Pod name my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5: Found 0 pods out of 1 May 20 22:06:27.112: INFO: Pod name my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5: Found 1 pods out of 1 May 20 22:06:27.112: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5" is running May 20 22:06:27.114: INFO: Pod "my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5-dsldx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 22:06:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 22:06:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 22:06:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 22:06:22 +0000 UTC Reason: Message:}]) May 20 22:06:27.115: INFO: Trying to dial the pod May 20 22:06:32.125: INFO: Controller my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5: Got expected result from replica 1 [my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5-dsldx]: "my-hostname-basic-4aae82af-5692-45ac-b970-8aea19b811d5-dsldx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:32.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2617" for this suite. 
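------------------------------
The ReplicaSet spec serves each replica's hostname over HTTP, then dials every replica through the apiserver proxy until each answers with its own pod name. A sketch using the agnhost serve-hostname mode (the image tag and port follow common e2e usage for this release and are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-basic
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hostname-basic
  template:
    metadata:
      labels:
        app: hostname-basic
    spec:
      containers:
      - name: hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["serve-hostname"]   # replies to GET / with the pod's hostname
        ports:
        - containerPort: 9376
EOF

# dial each replica through the apiserver pod proxy, as the spec does:
for p in $(kubectl get pods -l app=hostname-basic -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get --raw "/api/v1/namespaces/default/pods/${p}:9376/proxy/"; echo
done
------------------------------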
• [SLOW TEST:10.054 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":26,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:04.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:36.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3283" for this suite. 
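------------------------------
The three containers in the runtime spec ('terminate-cmd-rpa', 'terminate-cmd-rpof', 'terminate-cmd-rpn') exercise the restart policies Always, OnFailure and Never for a container that exits, checking RestartCount, Phase, Ready and State after each transition. The OnFailure case can be sketched like this (pod name is illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]   # always fails, so the kubelet keeps restarting it
EOF

# RestartCount climbs with each backoff cycle; Phase stays Running until the command succeeds:
kubectl get pod terminate-cmd-demo \
  -o jsonpath='restarts={.status.containerStatuses[0].restartCount} phase={.status.phase}{"\n"}'
------------------------------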
• [SLOW TEST:31.238 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":393,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:26.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 20 22:06:26.626: INFO: namespace kubectl-4469 May 20 22:06:26.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4469 create -f -' May 20 22:06:26.986: INFO: stderr: "" May 20 22:06:26.986: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 20 22:06:27.989: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:06:27.989: INFO: Found 0 / 1 May 20 22:06:28.992: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:06:28.992: INFO: Found 0 / 1 May 20 22:06:29.991: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:06:29.991: INFO: Found 0 / 1 May 20 22:06:30.989: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:06:30.989: INFO: Found 0 / 1 May 20 22:06:31.991: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:06:31.991: INFO: Found 1 / 1 May 20 22:06:31.991: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 20 22:06:31.994: INFO: Selector matched 1 pods for map[app:agnhost] May 20 22:06:31.994: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 22:06:31.994: INFO: wait on agnhost-primary startup in kubectl-4469 May 20 22:06:31.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4469 logs agnhost-primary-xgvcf agnhost-primary' May 20 22:06:32.156: INFO: stderr: "" May 20 22:06:32.156: INFO: stdout: "Paused\n" STEP: exposing RC May 20 22:06:32.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4469 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' May 20 22:06:32.387: INFO: stderr: "" May 20 22:06:32.387: INFO: stdout: "service/rm2 exposed\n" May 20 22:06:32.390: INFO: Service rm2 in namespace kubectl-4469 found. 
STEP: exposing service May 20 22:06:34.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4469 expose service rm2 --name=rm3 --port=2345 --target-port=6379' May 20 22:06:34.602: INFO: stderr: "" May 20 22:06:34.602: INFO: stdout: "service/rm3 exposed\n" May 20 22:06:34.605: INFO: Service rm3 in namespace kubectl-4469 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:36.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4469" for this suite. • [SLOW TEST:10.013 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":27,"skipped":396,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:36.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token May 20 22:06:36.597: INFO: created pod pod-service-account-defaultsa May 20 22:06:36.597: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 20 22:06:36.605: INFO: created pod pod-service-account-mountsa May 20 22:06:36.605: INFO: pod pod-service-account-mountsa service account token volume mount: true May 20 22:06:36.614: INFO: created pod pod-service-account-nomountsa May 20 22:06:36.614: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 20 22:06:36.622: INFO: created pod pod-service-account-defaultsa-mountspec May 20 22:06:36.622: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 20 22:06:36.632: INFO: created pod pod-service-account-mountsa-mountspec May 20 22:06:36.632: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 20 22:06:36.641: INFO: created pod pod-service-account-nomountsa-mountspec May 20 22:06:36.641: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 20 22:06:36.650: INFO: created pod pod-service-account-defaultsa-nomountspec May 20 22:06:36.650: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 20 22:06:36.658: INFO: created pod pod-service-account-mountsa-nomountspec May 20 22:06:36.658: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 20 22:06:36.667: INFO: created pod pod-service-account-nomountsa-nomountspec May 20 22:06:36.667: INFO: pod 
pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:36.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-76" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":20,"skipped":403,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:36.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:36.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9483" for this suite. 
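------------------------------
The discovery walk in the spec above can be replayed directly against the API server; /apis, the group document and the group-version document are all plain GETs (jq is assumed for readability):

kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
kubectl get --raw /apis/apiextensions.k8s.io | jq '.preferredVersion'
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq -r '.resources[].name'
# the last list should include "customresourcedefinitions", which is what the spec asserts
------------------------------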
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":21,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:22.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-tzpcl in namespace proxy-6454 I0520 22:06:22.419874 24 runners.go:190] Created replication controller with name: proxy-service-tzpcl, namespace: proxy-6454, replica count: 1 I0520 22:06:23.471099 24 runners.go:190] proxy-service-tzpcl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:06:24.472165 24 runners.go:190] proxy-service-tzpcl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:06:25.472674 24 runners.go:190] proxy-service-tzpcl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:06:26.473342 24 runners.go:190] proxy-service-tzpcl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:06:27.473739 24 runners.go:190] proxy-service-tzpcl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:06:27.476: INFO: setup took 5.067318566s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 20 22:06:27.480: INFO: (0) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 4.246583ms) May 20 22:06:27.480: INFO: (0) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 4.166436ms) May 20 22:06:27.480: INFO: (0) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 4.284388ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... 
(200; 4.226746ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 4.25856ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 4.466896ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 4.403089ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 4.29416ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 4.593952ms) May 20 22:06:27.481: INFO: (0) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 4.489714ms) May 20 22:06:27.482: INFO: (0) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 6.143677ms) May 20 22:06:27.483: INFO: (0) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 2.457127ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.994698ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.991715ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.012537ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 3.582182ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.629275ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 3.608598ms) May 20 22:06:27.488: INFO: (1) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 3.586107ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.936457ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 3.935265ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 4.188548ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 4.093253ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 4.369772ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 4.581125ms) May 20 22:06:27.489: INFO: (1) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 4.358865ms) May 20 22:06:27.491: INFO: (2) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.258132ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.398985ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.512301ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 2.515941ms) May 20 22:06:27.492: INFO: (2) 
/api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.442752ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.603543ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 2.943839ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.94548ms) May 20 22:06:27.492: INFO: (2) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.0467ms) May 20 22:06:27.493: INFO: (2) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.326968ms) May 20 22:06:27.493: INFO: (2) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 3.478577ms) May 20 22:06:27.493: INFO: (2) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 3.874243ms) May 20 22:06:27.493: INFO: (2) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 3.747347ms) May 20 22:06:27.493: INFO: (2) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 3.991008ms) May 20 22:06:27.495: INFO: (3) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 1.951447ms) May 20 22:06:27.495: INFO: (3) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 1.942345ms) May 20 22:06:27.496: INFO: (3) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.074158ms) May 20 22:06:27.496: INFO: (3) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 2.946412ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.908461ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 3.013847ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 3.071723ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 3.092785ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 3.462303ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.613077ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 3.663772ms) May 20 22:06:27.497: INFO: (3) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 3.64149ms) May 20 22:06:27.498: INFO: (3) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 4.001155ms) May 20 22:06:27.500: INFO: (4) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 1.949019ms) May 20 22:06:27.500: INFO: (4) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 2.229201ms) May 20 22:06:27.500: INFO: (4) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.311876ms) May 20 22:06:27.501: INFO: (4) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... 
(200; 2.618019ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 6.601296ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 6.72412ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 7.010322ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 6.944904ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 6.905457ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 6.95167ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 6.893216ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 6.807391ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 6.831977ms) May 20 22:06:27.505: INFO: (4) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 6.897576ms) May 20 22:06:27.509: INFO: (5) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.893927ms) May 20 22:06:27.509: INFO: (5) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 4.022465ms) May 20 22:06:27.509: INFO: (5) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 3.830148ms) May 20 22:06:27.509: INFO: (5) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 4.254757ms) May 20 22:06:27.509: INFO: (5) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 4.097349ms) May 20 22:06:27.510: INFO: (5) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 4.484013ms) May 20 22:06:27.511: INFO: (5) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 6.228665ms) May 20 22:06:27.516: INFO: (5) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 10.896679ms) May 20 22:06:27.519: INFO: (5) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 13.591543ms) May 20 22:06:27.519: INFO: (5) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 13.861883ms) May 20 22:06:27.519: INFO: (5) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 13.669652ms) May 20 22:06:27.519: INFO: (5) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... 
(200; 13.891254ms) May 20 22:06:27.519: INFO: (5) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 13.82401ms) May 20 22:06:27.519: INFO: (5) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 14.128905ms) May 20 22:06:27.522: INFO: (6) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.044355ms) May 20 22:06:27.522: INFO: (6) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.164597ms) May 20 22:06:27.522: INFO: (6) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 3.192388ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.248163ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 3.343273ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 3.472561ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.4024ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 3.376854ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 3.757724ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.567286ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 3.719316ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 4.098609ms) May 20 22:06:27.523: INFO: (6) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 4.107896ms) May 20 22:06:27.524: INFO: (6) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 4.245226ms) May 20 22:06:27.526: INFO: (7) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.622374ms) May 20 22:06:27.526: INFO: (7) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.712479ms) May 20 22:06:27.526: INFO: (7) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.634255ms) May 20 22:06:27.526: INFO: (7) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... 
(200; 2.801226ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.877339ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 2.980205ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.318737ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.609264ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 3.592076ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 3.719149ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 3.702221ms) May 20 22:06:27.527: INFO: (7) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 3.869788ms) May 20 22:06:27.528: INFO: (7) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 3.751003ms) May 20 22:06:27.528: INFO: (7) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.959919ms) May 20 22:06:27.528: INFO: (7) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 4.139222ms) May 20 22:06:27.530: INFO: (8) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.390375ms) May 20 22:06:27.531: INFO: (8) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.598141ms) May 20 22:06:27.531: INFO: (8) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 3.307279ms) May 20 22:06:27.531: INFO: (8) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.253066ms) May 20 22:06:27.532: INFO: (8) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 3.549527ms) May 20 22:06:27.532: INFO: (8) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 3.71225ms) May 20 22:06:27.532: INFO: (8) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 3.697306ms) May 20 22:06:27.532: INFO: (8) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.904574ms) May 20 22:06:27.532: INFO: (8) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 4.210881ms) May 20 22:06:27.535: INFO: (9) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.046548ms) May 20 22:06:27.535: INFO: (9) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 2.253962ms) May 20 22:06:27.535: INFO: (9) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.199941ms) May 20 22:06:27.535: INFO: (9) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... 
(200; 2.3901ms) May 20 22:06:27.535: INFO: (9) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.460948ms) May 20 22:06:27.535: INFO: (9) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.814763ms) May 20 22:06:27.536: INFO: (9) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.214387ms) May 20 22:06:27.536: INFO: (9) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.10767ms) May 20 22:06:27.536: INFO: (9) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.211996ms) May 20 22:06:27.536: INFO: (9) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test (200; 2.327486ms) May 20 22:06:27.539: INFO: (10) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.291914ms) May 20 22:06:27.539: INFO: (10) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.160198ms) May 20 22:06:27.539: INFO: (10) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test<... (200; 2.890545ms) May 20 22:06:27.540: INFO: (10) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.142252ms) May 20 22:06:27.540: INFO: (10) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 3.423385ms) May 20 22:06:27.541: INFO: (10) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.391288ms) May 20 22:06:27.541: INFO: (10) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 3.855006ms) May 20 22:06:27.541: INFO: (10) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 3.819599ms) May 20 22:06:27.541: INFO: (10) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 3.717105ms) May 20 22:06:27.541: INFO: (10) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 4.095955ms) May 20 22:06:27.543: INFO: (11) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.033869ms) May 20 22:06:27.544: INFO: (11) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.516698ms) May 20 22:06:27.544: INFO: (11) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.778938ms) May 20 22:06:27.544: INFO: (11) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 2.681764ms) May 20 22:06:27.544: INFO: (11) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.739617ms) May 20 22:06:27.544: INFO: (11) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.059231ms) May 20 22:06:27.544: INFO: (11) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.922312ms) May 20 22:06:27.545: INFO: (11) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... 
(200; 3.09105ms) May 20 22:06:27.545: INFO: (11) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test (200; 2.366593ms) May 20 22:06:27.548: INFO: (12) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.387528ms) May 20 22:06:27.548: INFO: (12) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.437449ms) May 20 22:06:27.549: INFO: (12) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 2.567647ms) May 20 22:06:27.549: INFO: (12) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.958682ms) May 20 22:06:27.549: INFO: (12) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.999971ms) May 20 22:06:27.549: INFO: (12) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 3.206783ms) May 20 22:06:27.549: INFO: (12) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: ... (200; 2.768038ms) May 20 22:06:27.553: INFO: (13) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.830686ms) May 20 22:06:27.553: INFO: (13) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.764555ms) May 20 22:06:27.554: INFO: (13) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.072591ms) May 20 22:06:27.554: INFO: (13) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 3.181911ms) May 20 22:06:27.554: INFO: (13) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.393006ms) May 20 22:06:27.554: INFO: (13) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: ... (200; 2.372463ms) May 20 22:06:27.557: INFO: (14) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.576274ms) May 20 22:06:27.558: INFO: (14) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.740814ms) May 20 22:06:27.558: INFO: (14) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.877168ms) May 20 22:06:27.558: INFO: (14) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.672791ms) May 20 22:06:27.558: INFO: (14) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.732094ms) May 20 22:06:27.558: INFO: (14) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.824192ms) May 20 22:06:27.558: INFO: (14) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test (200; 2.289349ms) May 20 22:06:27.561: INFO: (15) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.22912ms) May 20 22:06:27.562: INFO: (15) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.665302ms) May 20 22:06:27.562: INFO: (15) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.715066ms) May 20 22:06:27.562: INFO: (15) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test (200; 2.73852ms) May 20 22:06:27.566: INFO: (16) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... 
(200; 2.675482ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.953109ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 3.009524ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 3.191124ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.016758ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.400078ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 3.717005ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 3.512373ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 3.685864ms) May 20 22:06:27.567: INFO: (16) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 3.665011ms) May 20 22:06:27.568: INFO: (16) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 4.235325ms) May 20 22:06:27.571: INFO: (17) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.663767ms) May 20 22:06:27.571: INFO: (17) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 2.539294ms) May 20 22:06:27.571: INFO: (17) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 2.65195ms) May 20 22:06:27.572: INFO: (17) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 4.019705ms) May 20 22:06:27.572: INFO: (17) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.954307ms) May 20 22:06:27.572: INFO: (17) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 4.038119ms) May 20 22:06:27.573: INFO: (17) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 4.178886ms) May 20 22:06:27.573: INFO: (17) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 4.36802ms) May 20 22:06:27.573: INFO: (17) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: test (200; 2.185763ms) May 20 22:06:27.577: INFO: (18) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.38888ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 2.839988ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.043779ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:460/proxy/: tls baz (200; 2.914666ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.994254ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... 
(200; 3.00152ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname2/proxy/: bar (200; 3.380759ms) May 20 22:06:27.578: INFO: (18) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname2/proxy/: tls qux (200; 3.599905ms) May 20 22:06:27.579: INFO: (18) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: ... (200; 4.405661ms) May 20 22:06:27.580: INFO: (18) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 4.891847ms) May 20 22:06:27.580: INFO: (18) /api/v1/namespaces/proxy-6454/services/http:proxy-service-tzpcl:portname1/proxy/: foo (200; 5.642891ms) May 20 22:06:27.581: INFO: (18) /api/v1/namespaces/proxy-6454/services/https:proxy-service-tzpcl:tlsportname1/proxy/: tls baz (200; 5.846435ms) May 20 22:06:27.581: INFO: (18) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname2/proxy/: bar (200; 5.863809ms) May 20 22:06:27.581: INFO: (18) /api/v1/namespaces/proxy-6454/services/proxy-service-tzpcl:portname1/proxy/: foo (200; 6.426365ms) May 20 22:06:27.583: INFO: (19) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:1080/proxy/: test<... (200; 2.223805ms) May 20 22:06:27.584: INFO: (19) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:462/proxy/: tls qux (200; 2.483651ms) May 20 22:06:27.584: INFO: (19) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b:160/proxy/: foo (200; 2.852992ms) May 20 22:06:27.584: INFO: (19) /api/v1/namespaces/proxy-6454/pods/proxy-service-tzpcl-gjz6b/proxy/: test (200; 2.80714ms) May 20 22:06:27.584: INFO: (19) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:1080/proxy/: ... (200; 3.014147ms) May 20 22:06:27.584: INFO: (19) /api/v1/namespaces/proxy-6454/pods/http:proxy-service-tzpcl-gjz6b:162/proxy/: bar (200; 3.010781ms) May 20 22:06:27.584: INFO: (19) /api/v1/namespaces/proxy-6454/pods/https:proxy-service-tzpcl-gjz6b:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:37.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2922" for this suite. 
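------------------------------
The Watchers spec opens several watches from the same resourceVersion and requires every watcher to observe the events in one order. The same check can be sketched against the raw watch API (namespace and names are hypothetical; assumes kubectl proxy on port 8001):

kubectl proxy --port=8001 &
RV=$(kubectl get configmaps -n default -o jsonpath='{.metadata.resourceVersion}')
# two watches from the same starting point should deliver identical event sequences:
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}" > watch-a.log &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}" > watch-b.log &
kubectl create configmap order-test-1 -n default
kubectl create configmap order-test-2 -n default
sleep 2
kill %1 %2 %3 2>/dev/null
diff watch-a.log watch-b.log   # expect no difference between the two watchers
------------------------------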
• [SLOW TEST:5.209 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":27,"skipped":518,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:36.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:06:36.960: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b4ee083a-cf12-4278-8e35-20c2945bb157", Controller:(*bool)(0xc00483ec32), BlockOwnerDeletion:(*bool)(0xc00483ec33)}} May 20 22:06:36.964: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2ba0e05a-25e6-4387-b830-bf220f41e897", Controller:(*bool)(0xc00483eeaa), BlockOwnerDeletion:(*bool)(0xc00483eeab)}} May 20 22:06:36.967: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"10f858b0-2e58-42b9-9a9a-93ed97cc7619", Controller:(*bool)(0xc00483f12a), BlockOwnerDeletion:(*bool)(0xc00483f12b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:41.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1100" for this suite. 
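------------------------------
The garbage-collector spec wires pod1 -> pod3 -> pod2 -> pod1 into an ownerReference circle (visible in the OwnerReferences dumps above) and verifies deletion still makes progress. One edge of such a circle can be reproduced with a patch (assumes pods pod1 and pod3 already exist):

POD3_UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{
  \"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"${POD3_UID}\",
  \"controller\":true,\"blockOwnerDeletion\":true}]}}"
# after closing the circle the same way for pod2 and pod3, deleting any one pod
# must not dead-lock: the collector detects the cycle and removes all three.
kubectl delete pod pod1
------------------------------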
• [SLOW TEST:5.078 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":18,"skipped":337,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:41.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:50.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3380" for this suite. 
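------------------------------
The sysctl spec sets kernel.shm_rmid_forced through the pod securityContext, waits for the pod to complete and reads the value back from the container's output. A sketch (pod name illustrative; truly unsafe sysctls additionally require the kubelet to be started with an --allowed-unsafe-sysctls allowlist):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
EOF

kubectl logs sysctl-demo   # expect: kernel.shm_rmid_forced = 1
------------------------------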
• [SLOW TEST:8.065 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":19,"skipped":338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:36.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:06:37.098: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:06:39.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:06:41.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 20 22:06:43.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681197, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:06:46.122: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:06:46.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:54.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1164" for this suite. STEP: Destroying namespace "webhook-1164-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.498 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":22,"skipped":425,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:54.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all May 20 22:06:54.343: INFO: Waiting up to 5m0s for pod "client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c" in namespace "containers-4943" to be "Succeeded or Failed" May 20 22:06:54.345: INFO: Pod "client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251404ms May 20 22:06:56.349: INFO: Pod "client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005985002s May 20 22:06:58.352: INFO: Pod "client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009029704s STEP: Saw pod success May 20 22:06:58.352: INFO: Pod "client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c" satisfied condition "Succeeded or Failed" May 20 22:06:58.356: INFO: Trying to get logs from node node2 pod client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c container agnhost-container: STEP: delete the pod May 20 22:06:58.379: INFO: Waiting for pod client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c to disappear May 20 22:06:58.381: INFO: Pod client-containers-87282a00-39f4-4c95-8b1d-3c44c198506c no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:58.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4943" for this suite. 
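------------------------------
The Docker Containers test above ("override all") exercises the distinction between an image's baked-in ENTRYPOINT/CMD and the pod-level Command/Args fields: Command replaces the ENTRYPOINT, Args replaces the CMD, and with both set nothing from the image is used. A minimal sketch, with placeholder image and namespace:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-all-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "busybox:1.33",
				// Command overrides the image ENTRYPOINT, Args overrides
				// its CMD; with both set, the image defaults are ignored.
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "all"},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

The test then reads the pod's logs, as the log above shows, to confirm the overridden command actually ran.

------------------------------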
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":444,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:37.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:06:37.438: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Pending, waiting for it to be Running (with Ready = true) May 20 22:06:39.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Pending, waiting for it to be Running (with Ready = true) May 20 22:06:41.441: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:43.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:45.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:47.441: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:49.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:51.441: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:53.441: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:55.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:57.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = false) May 20 22:06:59.442: INFO: The status of Pod test-webserver-fe3f083a-08bc-4392-8707-615668d913cd is Running (Ready = true) May 20 22:06:59.444: INFO: Container started at 2022-05-20 22:06:40 +0000 UTC, pod became ready at 2022-05-20 22:06:57 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:06:59.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9813" for this suite. 
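------------------------------
What the readiness-probe test above checks is visible in its status transitions: the container starts at 22:06:40 but the pod only turns Ready at 22:06:57, because the probe's initial delay holds readiness back, and the container is never restarted. A sketch of such a pod follows; the image and delay are assumptions, and note that Handler is the v1.21-era field name on corev1.Probe (newer k8s.io/api releases rename it ProbeHandler).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx:1.21", // any container serving HTTP on :80 works here
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Port: intstr.FromInt(80)},
					},
					// The pod must not report Ready before this delay
					// elapses, which is exactly what the test asserts.
					InitialDelaySeconds: 20,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

------------------------------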
• [SLOW TEST:22.050 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":519,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:59.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 20 22:06:59.511: INFO: Waiting up to 5m0s for pod "downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5" in namespace "downward-api-2518" to be "Succeeded or Failed" May 20 22:06:59.514: INFO: Pod "downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15859ms May 20 22:07:01.518: INFO: Pod "downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006136122s May 20 22:07:03.521: INFO: Pod "downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009942611s STEP: Saw pod success May 20 22:07:03.521: INFO: Pod "downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5" satisfied condition "Succeeded or Failed" May 20 22:07:03.523: INFO: Trying to get logs from node node2 pod downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5 container dapi-container: STEP: delete the pod May 20 22:07:03.535: INFO: Waiting for pod downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5 to disappear May 20 22:07:03.537: INFO: Pod downward-api-9df17f54-f44f-4a00-ab56-a82c394d44d5 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:03.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2518" for this suite. 
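------------------------------
The Downward API test above maps a container's own resource requests and limits into environment variables via resourceFieldRef. A sketch of the wiring, with illustrative variable names and placeholder resource values:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Helper: an env var fed from one of this container's own resource fields.
	fromField := func(name, field string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: field},
			},
		}
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.33",
				Command: []string{"/bin/sh", "-c", "env | grep -E 'CPU|MEMORY'"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					fromField("CPU_LIMIT", "limits.cpu"),
					fromField("MEMORY_LIMIT", "limits.memory"),
					fromField("CPU_REQUEST", "requests.cpu"),
					fromField("MEMORY_REQUEST", "requests.memory"),
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

------------------------------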
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":528,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:03.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:03.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-6905" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":30,"skipped":542,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:58.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 20 22:06:58.439: INFO: Pod name pod-release: Found 0 pods out of 1 May 20 22:07:03.443: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:04.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7118" for this suite. 
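------------------------------
The ReplicationController test above relies on label-selector semantics: an RC only counts pods whose labels match its selector, so relabeling a pod "releases" it — the RC orphans it and creates a replacement to restore the replica count, which is why the log shows a pod created and then released. The relabeling step can be reproduced with a strategic-merge patch; the pod name, namespace, and label key/value below are placeholders, not the test's actual values.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overwrite the label the RC selects on (assumed here to be
	// name=pod-release, as in the test's pod naming). Once the pod's
	// labels no longer match the selector, the RC releases it.
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
	pod, err := client.CoreV1().Pods("default").Patch(
		context.Background(), "pod-release-demo", // placeholder pod name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s relabeled; ownerReferences now: %v\n", pod.Name, pod.OwnerReferences)
}

------------------------------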
• [SLOW TEST:6.058 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":24,"skipped":451,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:04.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:17.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-452" for this suite. • [SLOW TEST:13.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":25,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:17.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-d67f0c38-be71-44ff-b5f3-f2a0f73ac32b STEP: Creating a pod to test consume configMaps May 20 22:07:17.706: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b" in namespace "projected-4690" to be "Succeeded or Failed" May 20 22:07:17.708: INFO: Pod "pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054039ms May 20 22:07:19.714: INFO: Pod "pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007686479s May 20 22:07:21.718: INFO: Pod "pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011121931s STEP: Saw pod success May 20 22:07:21.718: INFO: Pod "pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b" satisfied condition "Succeeded or Failed" May 20 22:07:21.720: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b container agnhost-container: STEP: delete the pod May 20 22:07:21.738: INFO: Waiting for pod pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b to disappear May 20 22:07:21.740: INFO: Pod pod-projected-configmaps-899b7784-037d-4754-82c5-9371ee72e41b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:21.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4690" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:15.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-f53e4789-dd1e-4225-b414-0de17c36b8d8 in namespace container-probe-4603 May 20 22:03:23.344: INFO: Started pod busybox-f53e4789-dd1e-4225-b414-0de17c36b8d8 in namespace container-probe-4603 STEP: checking the pod's current state and verifying that restartCount is present May 20 22:03:23.347: INFO: Initial restart count of pod busybox-f53e4789-dd1e-4225-b414-0de17c36b8d8 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:23.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4603" for this suite. 
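------------------------------
The long-running probe test above (≈248 s) is the negative case: the container writes /tmp/health at startup, the exec probe `cat /tmp/health` keeps succeeding, and the suite watches restartCount stay at 0 for roughly four minutes before deleting the pod. A sketch of such a pod (image and timings are assumptions; Handler is the v1.21-era field name, ProbeHandler in newer k8s.io/api releases):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.33",
				// The file the probe reads is created once and never
				// removed, so the probe keeps succeeding and the
				// container's restartCount stays 0.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

------------------------------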
• [SLOW TEST:248.544 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:03.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled May 20 22:07:03.688: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:05.693: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:07.691: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled May 20 22:07:07.706: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:09.710: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:11.710: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides May 20 22:07:11.722: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:13.727: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:15.727: INFO: The status of Pod pod3 is Running (Ready = true) May 20 22:07:15.741: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:17.746: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:19.744: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 May 20 22:07:19.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-8498 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:07:19.747: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to 
serverIP: 10.10.190.207, port: 54323 May 20 22:07:19.874: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-8498 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:07:19.874: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP May 20 22:07:19.988: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-8498 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:07:19.988: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:25.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-8498" for this suite. • [SLOW TEST:21.481 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":556,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:23.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-8d5275bc-af52-446c-8d85-95fe144a8155 STEP: Creating a pod to test consume configMaps May 20 22:07:23.946: INFO: Waiting up to 5m0s for pod "pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d" in namespace "configmap-7244" to be "Succeeded or Failed" May 20 22:07:23.950: INFO: Pod "pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08287ms May 20 22:07:25.954: INFO: Pod "pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007912182s May 20 22:07:27.958: INFO: Pod "pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012590356s STEP: Saw pod success May 20 22:07:27.958: INFO: Pod "pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d" satisfied condition "Succeeded or Failed" May 20 22:07:27.961: INFO: Trying to get logs from node node2 pod pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d container agnhost-container: STEP: delete the pod May 20 22:07:27.978: INFO: Waiting for pod pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d to disappear May 20 22:07:27.980: INFO: Pod pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:27.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7244" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":113,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:01.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4226 STEP: creating service affinity-nodeport in namespace services-4226 STEP: creating replication controller affinity-nodeport in namespace services-4226 I0520 22:05:01.920399 35 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-4226, replica count: 3 I0520 22:05:04.972933 35 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:05:07.973105 35 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:05:07.983: INFO: Creating new exec pod May 20 22:05:15.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' May 20 22:05:15.675: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" May 20 22:05:15.675: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:05:15.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.49.125 80' May 20 22:05:15.986: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.49.125 80\nConnection to 10.233.49.125 80 port [tcp/http] succeeded!\n" May 20 22:05:15.986: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:05:15.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:16.238: INFO: rc: 1 May 20 22:05:16.238: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:17.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:17.578: INFO: rc: 1 May 20 22:05:17.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:18.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:18.787: INFO: rc: 1 May 20 22:05:18.787: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:19.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:19.529: INFO: rc: 1 May 20 22:05:19.529: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:20.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:20.893: INFO: rc: 1 May 20 22:05:20.893: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:21.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:21.474: INFO: rc: 1 May 20 22:05:21.474: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:22.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:22.486: INFO: rc: 1 May 20 22:05:22.486: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:23.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:23.491: INFO: rc: 1 May 20 22:05:23.491: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:24.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:24.785: INFO: rc: 1 May 20 22:05:24.785: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:25.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:25.740: INFO: rc: 1 May 20 22:05:25.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:26.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:26.770: INFO: rc: 1 May 20 22:05:26.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:27.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:27.664: INFO: rc: 1 May 20 22:05:27.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:28.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:28.765: INFO: rc: 1 May 20 22:05:28.765: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:29.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:29.528: INFO: rc: 1 May 20 22:05:29.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:30.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:30.568: INFO: rc: 1 May 20 22:05:30.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:31.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:31.487: INFO: rc: 1 May 20 22:05:31.487: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:32.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:32.511: INFO: rc: 1 May 20 22:05:32.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:33.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:33.552: INFO: rc: 1 May 20 22:05:33.552: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:34.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:34.989: INFO: rc: 1 May 20 22:05:34.990: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31569 + echo hostName nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:35.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:35.536: INFO: rc: 1 May 20 22:05:35.536: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:36.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:36.492: INFO: rc: 1 May 20 22:05:36.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:37.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:37.851: INFO: rc: 1 May 20 22:05:37.851: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31569 + echo hostName nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:38.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:38.554: INFO: rc: 1 May 20 22:05:38.554: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:39.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:39.506: INFO: rc: 1 May 20 22:05:39.506: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:40.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:40.482: INFO: rc: 1 May 20 22:05:40.482: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:41.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:41.932: INFO: rc: 1 May 20 22:05:41.932: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:42.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:42.952: INFO: rc: 1 May 20 22:05:42.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:43.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:05:43.478: INFO: rc: 1 May 20 22:05:43.478: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:44 - 22:07:12: [identical probe attempts repeated roughly once per second, each returning rc: 1 with "nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused" followed by "Retrying..."; repeated output elided]
May 20 22:07:13.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:07:13.492: INFO: rc: 1 May 20 22:07:13.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:07:14.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:07:14.478: INFO: rc: 1 May 20 22:07:14.478: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:07:15.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:07:16.388: INFO: rc: 1 May 20 22:07:16.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:07:16.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569' May 20 22:07:16.716: INFO: rc: 1 May 20 22:07:16.716: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4226 exec execpod-affinitysr8d6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31569: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31569 nc: connect to 10.10.190.207 port 31569 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:07:16.717: FAIL: Unexpected error: <*errors.errorString | 0xc0010d5de0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31569 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31569 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0016d8dc0, 0x77b33d8, 0xc0012b54a0, 0xc0051cb900, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a5a780)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a5a780)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001a5a780, 0x70f99e8)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
May 20 22:07:16.718: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-4226, will wait for the garbage collector to delete the pods
May 20 22:07:16.794: INFO: Deleting ReplicationController affinity-nodeport took: 3.470122ms
May 20 22:07:16.895: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.228889ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4226".
STEP: Found 27 events.
May 20 22:07:27.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-8f96d: { } Scheduled: Successfully assigned services-4226/affinity-nodeport-8f96d to node1
May 20 22:07:27.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-rr9kn: { } Scheduled: Successfully assigned services-4226/affinity-nodeport-rr9kn to node2
May 20 22:07:27.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-zl64b: { } Scheduled: Successfully assigned services-4226/affinity-nodeport-zl64b to node1
May 20 22:07:27.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitysr8d6: { } Scheduled: Successfully assigned services-4226/execpod-affinitysr8d6 to node1
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:01 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-zl64b
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:01 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-8f96d
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:01 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-rr9kn
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:03 +0000 UTC - event for affinity-nodeport-rr9kn: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:03 +0000 UTC - event for affinity-nodeport-rr9kn: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 398.083838ms
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:04 +0000 UTC - event for affinity-nodeport-rr9kn: {kubelet node2} Created: Created container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:04 +0000 UTC - event for affinity-nodeport-rr9kn: {kubelet node2} Started: Started container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:04 +0000 UTC - event for affinity-nodeport-zl64b: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:05 +0000 UTC - event for affinity-nodeport-8f96d: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:05 +0000 UTC - event for affinity-nodeport-zl64b: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 419.446584ms
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:05 +0000 UTC - event for affinity-nodeport-zl64b: {kubelet node1} Started: Started container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:05 +0000 UTC - event for affinity-nodeport-zl64b: {kubelet node1} Created: Created container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:06 +0000 UTC - event for affinity-nodeport-8f96d: {kubelet node1} Started: Started container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:06 +0000 UTC - event for affinity-nodeport-8f96d: {kubelet node1} Created: Created container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:06 +0000 UTC - event for affinity-nodeport-8f96d: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 617.701036ms
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:10 +0000 UTC - event for execpod-affinitysr8d6: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:11 +0000 UTC - event for execpod-affinitysr8d6: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 540.598438ms
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:11 +0000 UTC - event for execpod-affinitysr8d6: {kubelet node1} Created: Created container agnhost-container
May 20 22:07:27.015: INFO: At 2022-05-20 22:05:12 +0000 UTC - event for execpod-affinitysr8d6: {kubelet node1} Started: Started container agnhost-container
May 20 22:07:27.015: INFO: At 2022-05-20 22:07:16 +0000 UTC - event for affinity-nodeport-8f96d: {kubelet node1} Killing: Stopping container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:07:16 +0000 UTC - event for affinity-nodeport-rr9kn: {kubelet node2} Killing: Stopping container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:07:16 +0000 UTC - event for affinity-nodeport-zl64b: {kubelet node1} Killing: Stopping container affinity-nodeport
May 20 22:07:27.015: INFO: At 2022-05-20 22:07:16 +0000 UTC - event for execpod-affinitysr8d6: {kubelet node1} Killing: Stopping container agnhost-container
May 20 22:07:27.017: INFO: POD NODE PHASE GRACE CONDITIONS
May 20 22:07:27.017: INFO:
May 20 22:07:27.022: INFO: Logging node info for node master1
May 20 22:07:27.026: INFO: Node Info: &Node{ObjectMeta:{master1 b016dcf2-74b7-4456-916a-8ca363b9ccc3 42099 0 2022-05-20 20:01:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:25 
+0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 22:07:27.027: INFO: Logging kubelet events for node master1
May 20 22:07:27.029: INFO: Logging pods the kubelet thinks is on node master1
May 20 22:07:27.054: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.054: INFO: Container nfd-controller ready: true, restart count 0
May 20 22:07:27.054: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:27.054: INFO: Container node-exporter ready: true, restart count 0
May 20 22:07:27.054: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-scheduler ready: true, restart count 1
May 20 22:07:27.054: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-apiserver ready: true, restart count 0
May 20 22:07:27.054: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-controller-manager ready: true, restart count 3
May 20 22:07:27.054: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-proxy ready: true, restart count 2
May 20 22:07:27.054: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 22:07:27.054: INFO: Init container install-cni ready: true, restart count 2
May 20 22:07:27.054: INFO: Container kube-flannel ready: true, restart count 1
May 20 22:07:27.054: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-multus ready: true, restart count 1
May 20 22:07:27.054: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:27.054: INFO: Container docker-registry ready: true, restart count 0
May 20 22:07:27.054: INFO: Container nginx ready: true, restart count 0
May 20 22:07:27.054: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:27.054: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:27.054: INFO: Container prometheus-operator ready: true, restart count 0
May 20 22:07:27.141: INFO: Latency metrics for node master1
May 20 22:07:27.141: INFO: Logging node info for node master2
May 20 22:07:27.144: INFO: Node Info: &Node{ObjectMeta:{master2 ddc04b08-e43a-4e18-a612-aa3bf7f8411e 42101 0 2022-05-20 20:01:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 22:07:27.144: INFO: Logging kubelet events for node master2
May 20 22:07:27.147: INFO: Logging pods the kubelet thinks is on node master2
May 20 22:07:27.161: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 22:07:27.161: INFO: Init container install-cni ready: true, restart count 2
May 20 22:07:27.161: INFO: Container kube-flannel ready: true, restart count 1
May 20 22:07:27.161: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.161: INFO: Container coredns ready: true, restart count 1
May 20 22:07:27.161: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.161: INFO: Container autoscaler ready: true, restart count 1
May 20 22:07:27.161: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:27.161: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:27.161: INFO: Container node-exporter ready: true, restart count 0
May 20 22:07:27.161: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.161: INFO: Container kube-apiserver ready: true, restart count 0
May 20 22:07:27.162: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.162: INFO: Container kube-controller-manager ready: true, restart count 2
May 20 22:07:27.162: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.162: INFO: Container kube-proxy ready: true, restart count 1
May 20 22:07:27.162: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.162: INFO: Container kube-scheduler ready: true, restart count 3
May 20 22:07:27.162: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.162: INFO: Container kube-multus ready: true, restart count 1
May 20 22:07:27.239: INFO: Latency metrics for node master2
May 20 22:07:27.239: INFO: Logging node info for node master3
May 20 22:07:27.241: INFO: Node Info: &Node{ObjectMeta:{master3 f42c1bd6-d828-4857-9180-56c73dcc370f 42104 0 2022-05-20 20:02:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m
DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:25 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 22:07:27.242: INFO: Logging kubelet events for node master3
May 20 22:07:27.243: INFO: Logging pods the kubelet thinks is on node master3
May 20 22:07:27.257: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.257: INFO: Container kube-controller-manager ready: true, restart count 1
May 20 22:07:27.257: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.257: INFO: Container kube-scheduler ready: true, restart count 2
May 20 22:07:27.257: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.257: INFO: Container kube-proxy ready: true, restart count 2
May 20 22:07:27.257: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 22:07:27.257: INFO: Init container install-cni ready: true, restart count 0
May 20 22:07:27.257: INFO: Container kube-flannel ready: true, restart count 2
May 20 22:07:27.257: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.257: INFO: Container kube-apiserver ready: true, restart count 0
May 20 22:07:27.257: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.257: INFO: Container kube-multus ready: true, restart count 1
May 20 22:07:27.257: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:27.257: INFO: Container coredns ready: true, restart count 1
May 20 22:07:27.257: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:27.257: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:27.257: INFO: Container node-exporter ready: true, restart count 0
May 20 22:07:27.343: INFO: Latency metrics for node master3
May 20 22:07:27.343: INFO: Logging node info for node node1
May 20 22:07:27.346: INFO: Node Info: &Node{ObjectMeta:{node1 65c381dd-b6f5-4e67-a327-7a45366d15af 42025 0 2022-05-20 20:03:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:15:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:20 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:20 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:20 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:20 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:07:27.347: INFO: Logging kubelet events for node node1 May 20 22:07:27.349: INFO: Logging pods the kubelet thinks is on node node1 May 20 22:07:27.366: INFO: pod3 started at 2022-05-20 22:07:11 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.366: INFO: Container agnhost ready: true, restart count 0 May 20 22:07:27.366: INFO: e2e-host-exec started at 2022-05-20 22:07:16 +0000 UTC (0+1 container statuses recorded) May 20 
22:07:27.366: INFO: Container e2e-host-exec ready: true, restart count 0 May 20 22:07:27.367: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:07:27.367: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded) May 20 22:07:27.367: INFO: Container discover ready: false, restart count 0 May 20 22:07:27.367: INFO: Container init ready: false, restart count 0 May 20 22:07:27.367: INFO: Container install ready: false, restart count 0 May 20 22:07:27.367: INFO: affinity-nodeport-transition-lvnqj started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 20 22:07:27.367: INFO: pod2 started at 2022-05-20 22:07:07 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container agnhost ready: true, restart count 0 May 20 22:07:27.367: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:07:27.367: INFO: Init container install-cni ready: true, restart count 2 May 20 22:07:27.367: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:07:27.367: INFO: test-pod started at 2022-05-20 22:06:36 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container webserver ready: true, restart count 0 May 20 22:07:27.367: INFO: pod1 started at 2022-05-20 22:07:03 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container agnhost ready: true, restart count 0 May 20 22:07:27.367: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:07:27.367: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded) May 20 22:07:27.367: INFO: Container nodereport ready: true, restart count 0 May 20 22:07:27.367: INFO: Container reconcile ready: true, restart count 0 May 20 22:07:27.367: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container kube-multus ready: true, restart count 1 May 20 22:07:27.367: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:07:27.367: INFO: concurrent-27551407-dc2mt started at 2022-05-20 22:07:00 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container c ready: true, restart count 0 May 20 22:07:27.367: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:07:27.367: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:07:27.367: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:07:27.367: INFO: Container node-exporter ready: true, restart count 0 May 20 22:07:27.367: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.367: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:07:27.367: INFO: prometheus-k8s-0 
started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded) May 20 22:07:27.367: INFO: Container config-reloader ready: true, restart count 0 May 20 22:07:27.367: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:07:27.367: INFO: Container grafana ready: true, restart count 0 May 20 22:07:27.367: INFO: Container prometheus ready: true, restart count 1 May 20 22:07:27.367: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:07:27.367: INFO: Container collectd ready: true, restart count 0 May 20 22:07:27.367: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:07:27.367: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:07:27.665: INFO: Latency metrics for node node1 May 20 22:07:27.665: INFO: Logging node info for node node2 May 20 22:07:27.669: INFO: Node Info: &Node{ObjectMeta:{node2 a0e0a426-876d-4419-96e4-c6977ef3393c 42017 0 2022-05-20 20:03:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 
kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:18 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:18 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:18 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:18 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:07:27.670: INFO: Logging kubelet events for node node2 May 20 22:07:27.672: INFO: Logging pods the kubelet thinks is on node node2 May 20 22:07:27.686: INFO: forbid-27551406-k5fwc started at 2022-05-20 22:06:00 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container c ready: true, restart count 0 May 20 22:07:27.686: INFO: foo-74lz7 started at 2022-05-20 22:06:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container c ready: true, restart count 0 May 20 22:07:27.686: INFO: var-expansion-94e74d38-84f2-4c02-8591-cd66058ba4c0 started at 2022-05-20 22:07:25 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container dapi-container ready: false, restart count 0 May 20 22:07:27.686: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:07:27.686: INFO: foo-q8q6l started at 2022-05-20 22:06:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container c ready: true, restart count 0 May 20 22:07:27.686: INFO: affinity-nodeport-transition-mnvzn started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 20 
22:07:27.686: INFO: affinity-nodeport-transition-cvbv6 started at 2022-05-20 22:05:17 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 20 22:07:27.686: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container kube-multus ready: true, restart count 1 May 20 22:07:27.686: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:07:27.686: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container tas-extender ready: true, restart count 0 May 20 22:07:27.686: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded) May 20 22:07:27.686: INFO: Container discover ready: false, restart count 0 May 20 22:07:27.686: INFO: Container init ready: false, restart count 0 May 20 22:07:27.686: INFO: Container install ready: false, restart count 0 May 20 22:07:27.686: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:07:27.686: INFO: Container collectd ready: true, restart count 0 May 20 22:07:27.686: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:07:27.686: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:07:27.686: INFO: busybox-1d29b80b-166a-4436-a3e3-d054a65734ae started at 2022-05-20 22:07:21 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container busybox ready: true, restart count 0 May 20 22:07:27.686: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:07:27.686: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:07:27.686: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:07:27.686: INFO: Init container install-cni ready: true, restart count 1 May 20 22:07:27.686: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:07:27.686: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:07:27.686: INFO: execpod-affinityrptgl started at 2022-05-20 22:05:29 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:07:27.686: INFO: pod-configmaps-5abb2c75-6d1e-42c6-8328-638631bc767d started at 2022-05-20 22:07:23 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container agnhost-container ready: false, restart count 0 May 20 22:07:27.686: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:07:27.686: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:07:27.686: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded) May 20 22:07:27.686: INFO: Container 
nodereport ready: true, restart count 0 May 20 22:07:27.686: INFO: Container reconcile ready: true, restart count 0 May 20 22:07:27.686: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:07:27.686: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:07:27.686: INFO: Container node-exporter ready: true, restart count 0 May 20 22:07:28.521: INFO: Latency metrics for node node2 May 20 22:07:28.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4226" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [146.644 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:07:16.717: Unexpected error: <*errors.errorString | 0xc0010d5de0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31569 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31569 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":286,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:50.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1157, will wait for the garbage collector to delete the pods May 20 22:06:54.212: INFO: Deleting Job.batch foo took: 4.637705ms May 20 22:06:54.313: INFO: Terminating Job.batch foo pods took: 101.183662ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:36.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1157" for this suite. 
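
The garbage-collected Job deletion stepped through above ("will wait for the garbage collector to delete the pods") can be reproduced with client-go. A minimal sketch, not the e2e framework's own helper: it assumes a clientset built elsewhere (e.g. from /root/.kube/config), and the suite additionally polls until the Job's pods are gone, which this function does not do.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJob requests deletion of a Job with background propagation, so the
// garbage collector removes the Job's pods after the Job object is deleted.
// A caller reproducing the test step would then poll until no pods with the
// Job's label selector remain.
func deleteJob(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	return client.BatchV1().Jobs(namespace).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
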
• [SLOW TEST:46.808 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":20,"skipped":367,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:27.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-5240 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5240 to expose endpoints map[] May 20 22:07:28.025: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found May 20 22:07:29.032: INFO: successfully validated that service endpoint-test2 in namespace services-5240 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5240 May 20 22:07:29.045: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:31.048: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:33.048: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5240 to expose endpoints map[pod1:[80]] May 20 22:07:33.059: INFO: successfully validated that service endpoint-test2 in namespace services-5240 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-5240 May 20 22:07:33.073: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:35.078: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:37.077: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5240 to expose endpoints map[pod1:[80] pod2:[80]] May 20 22:07:37.091: INFO: successfully validated that service endpoint-test2 in namespace services-5240 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-5240 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5240 to expose endpoints map[pod2:[80]] May 20 22:07:37.105: INFO: successfully validated that service endpoint-test2 in namespace services-5240 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-5240 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5240 to expose endpoints map[] May 20 22:07:37.117: INFO: successfully validated that service endpoint-test2 in namespace services-5240 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:37.126: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5240" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.141 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":9,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:25.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:07:29.202: INFO: Deleting pod "var-expansion-94e74d38-84f2-4c02-8591-cd66058ba4c0" in namespace "var-expansion-1734" May 20 22:07:29.206: INFO: Wait up to 5m0s for pod "var-expansion-94e74d38-84f2-4c02-8591-cd66058ba4c0" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:37.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1734" for this suite. 
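
The var-expansion pod deleted above is built around a volume subPathExpr containing shell backticks. A rough sketch of that pod shape, with illustrative names (the container name "dapi-container" is taken from the kubelet listing earlier in this log): subPathExpr only substitutes $(VAR) references to environment variables, so a backtick expression is never expanded and the pod is expected to fail, which is what this test asserts.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A pod whose volume mount uses a backtick expression in SubPathExpr.
// Only $(VAR_NAME) substitution is supported there; the backticks are not
// evaluated as shell, and the test expects this pod to fail.
var backtickSubpathPod = &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-backticks"},
	Spec: v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever,
		Containers: []v1.Container{{
			Name:    "dapi-container",
			Image:   "busybox:1.28",
			Command: []string{"sh", "-c", "true"},
			VolumeMounts: []v1.VolumeMount{{
				Name:        "workdir",
				MountPath:   "/logs",
				SubPathExpr: "`hostname`", // invalid: only $(VAR) references are expanded
			}},
		}},
		Volumes: []v1.Volume{{
			Name:         "workdir",
			VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
		}},
	},
}
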
• [SLOW TEST:12.061 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":32,"skipped":571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:36.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:07:36.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367" in namespace "downward-api-136" to be "Succeeded or Failed" May 20 22:07:36.982: INFO: Pod "downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439378ms May 20 22:07:38.985: INFO: Pod "downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005992203s May 20 22:07:40.989: INFO: Pod "downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010266843s STEP: Saw pod success May 20 22:07:40.989: INFO: Pod "downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367" satisfied condition "Succeeded or Failed" May 20 22:07:40.992: INFO: Trying to get logs from node node1 pod downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367 container client-container: STEP: delete the pod May 20 22:07:41.005: INFO: Waiting for pod downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367 to disappear May 20 22:07:41.008: INFO: Pod downwardapi-volume-3b986de9-63b5-4b4a-9d99-11a00b2d2367 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:41.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-136" for this suite. 
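
For reference, a pod of roughly the shape this downward API test submits; names and image are illustrative, not the generated ones in the log. The volume file is backed by a resourceFieldRef on limits.cpu, and because the container sets no CPU limit, the kubelet falls back to reporting the node's allocatable CPU, which is what the test reads back from the container log.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A downward API volume exposing the container's effective CPU limit as a
// file. With no limit set on the container, the value resolves to the node's
// allocatable CPU (77 on these nodes, per the Node Info dumps above).
var downwardAPIPod = &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
	Spec: v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever,
		Containers: []v1.Container{{
			Name:    "client-container",
			Image:   "busybox:1.28",
			Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
			VolumeMounts: []v1.VolumeMount{{
				Name:      "podinfo",
				MountPath: "/etc/podinfo",
			}},
		}},
		Volumes: []v1.Volume{{
			Name: "podinfo",
			VolumeSource: v1.VolumeSource{
				DownwardAPI: &v1.DownwardAPIVolumeSource{
					Items: []v1.DownwardAPIVolumeFile{{
						Path: "cpu_limit",
						ResourceFieldRef: &v1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.cpu",
						},
					}},
				},
			},
		}},
	},
}
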
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":372,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:28.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:07:29.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:07:31.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:07:33.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681249, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:07:36.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:07:36.081: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3338-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:44.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2061" for this suite. STEP: Destroying namespace "webhook-2061-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.576 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":349,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:41.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-a7cadf33-ee82-4d2d-b515-e70e3974ffeb STEP: Creating a pod to test consume configMaps May 20 22:07:41.069: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67" in namespace "projected-33" to be "Succeeded or Failed" May 20 22:07:41.071: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010228ms May 20 22:07:43.075: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006427163s May 20 22:07:45.080: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011013629s May 20 22:07:47.083: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01396487s May 20 22:07:49.088: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.018988318s May 20 22:07:51.091: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022794161s STEP: Saw pod success May 20 22:07:51.092: INFO: Pod "pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67" satisfied condition "Succeeded or Failed" May 20 22:07:51.094: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67 container agnhost-container: STEP: delete the pod May 20 22:07:51.111: INFO: Waiting for pod pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67 to disappear May 20 22:07:51.113: INFO: Pod pod-projected-configmaps-2a9c897d-b9d6-4db9-854c-f6b1d3d89e67 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:51.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-33" for this suite. • [SLOW TEST:10.088 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:44.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 22:07:52.346: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:52.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9922" for this suite. 
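
The termination-message check above hinges on a single container field. A hedged sketch of that container shape (name, image, and command are illustrative, not the suite's exact spec): with TerminationMessagePolicy set to FallbackToLogsOnError, a container that exits non-zero without writing /dev/termination-log gets the tail of its log output, here "DONE", surfaced as the termination message the test then matches.

package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// A container whose last log line becomes its termination message on failure,
// because FallbackToLogsOnError applies when the container exits non-zero
// and its termination-log file is empty.
var termMsgContainer = v1.Container{
	Name:                     "termination-message-container",
	Image:                    "busybox:1.28",
	Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
	TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
}
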
• [SLOW TEST:8.092 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":363,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:52.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-22a30195-f7c5-425b-b894-2d0afbf62fa1 STEP: Creating secret with name secret-projected-all-test-volume-282abf38-58a8-456a-bff8-e8eae654c78d STEP: Creating a pod to test Check all projections for projected volume plugin May 20 22:07:52.419: INFO: Waiting up to 5m0s for pod "projected-volume-c6191750-1ddb-4855-9053-81a011835aea" in namespace "projected-56" to be "Succeeded or Failed" May 20 22:07:52.423: INFO: Pod "projected-volume-c6191750-1ddb-4855-9053-81a011835aea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212133ms May 20 22:07:54.427: INFO: Pod "projected-volume-c6191750-1ddb-4855-9053-81a011835aea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008019003s May 20 22:07:56.431: INFO: Pod "projected-volume-c6191750-1ddb-4855-9053-81a011835aea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011691216s STEP: Saw pod success May 20 22:07:56.431: INFO: Pod "projected-volume-c6191750-1ddb-4855-9053-81a011835aea" satisfied condition "Succeeded or Failed" May 20 22:07:56.433: INFO: Trying to get logs from node node2 pod projected-volume-c6191750-1ddb-4855-9053-81a011835aea container projected-all-volume-test: STEP: delete the pod May 20 22:07:56.452: INFO: Waiting for pod projected-volume-c6191750-1ddb-4855-9053-81a011835aea to disappear May 20 22:07:56.455: INFO: Pod projected-volume-c6191750-1ddb-4855-9053-81a011835aea no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:56.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-56" for this suite. 
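
The "all projections" pod above mounts a single projected volume merging the three sources the test creates. A sketch of that volume, with the generated object names from the log shortened to their stable prefixes:

package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// One projected volume combining a ConfigMap, a Secret, and a downward API
// file, so all three appear under a single mount point.
var allInOneVolume = v1.Volume{
	Name: "projected-all-volume-test",
	VolumeSource: v1.VolumeSource{
		Projected: &v1.ProjectedVolumeSource{
			Sources: []v1.VolumeProjection{
				{ConfigMap: &v1.ConfigMapProjection{
					LocalObjectReference: v1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
				}},
				{Secret: &v1.SecretProjection{
					LocalObjectReference: v1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
				}},
				{DownwardAPI: &v1.DownwardAPIProjection{
					Items: []v1.DownwardAPIVolumeFile{{
						Path:     "podname",
						FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
					}},
				}},
			},
		},
	},
}
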
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":367,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:17.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9879 STEP: creating service affinity-nodeport-transition in namespace services-9879 STEP: creating replication controller affinity-nodeport-transition in namespace services-9879 I0520 22:05:17.569685 36 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9879, replica count: 3 I0520 22:05:20.620851 36 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:05:23.622022 36 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:05:26.622284 36 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:05:29.622893 36 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:05:29.631: INFO: Creating new exec pod May 20 22:05:38.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' May 20 22:05:39.531: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 20 22:05:39.531: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:05:39.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.63.239 80' May 20 22:05:39.821: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.63.239 80\nConnection to 10.233.63.239 80 port [tcp/http] succeeded!\n" May 20 22:05:39.821: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 20 22:05:39.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo 
hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:40.223: INFO: rc: 1 May 20 22:05:40.223: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:41.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:41.637: INFO: rc: 1 May 20 22:05:41.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:42.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:42.458: INFO: rc: 1 May 20 22:05:42.459: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:43.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:43.461: INFO: rc: 1 May 20 22:05:43.461: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:44.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:44.606: INFO: rc: 1 May 20 22:05:44.606: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:45.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:45.611: INFO: rc: 1 May 20 22:05:45.612: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:46.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:46.715: INFO: rc: 1 May 20 22:05:46.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:47.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:47.690: INFO: rc: 1 May 20 22:05:47.690: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:48.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:48.463: INFO: rc: 1 May 20 22:05:48.463: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:49.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:49.652: INFO: rc: 1 May 20 22:05:49.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:50.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:50.487: INFO: rc: 1 May 20 22:05:50.488: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:51.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:51.476: INFO: rc: 1 May 20 22:05:51.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:52.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:52.503: INFO: rc: 1 May 20 22:05:52.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:53.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:53.460: INFO: rc: 1 May 20 22:05:53.460: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:54.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:54.578: INFO: rc: 1 May 20 22:05:54.578: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:55.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:55.555: INFO: rc: 1 May 20 22:05:55.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:56.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:56.540: INFO: rc: 1 May 20 22:05:56.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:05:57.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:57.522: INFO: rc: 1 May 20 22:05:57.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:58.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:58.488: INFO: rc: 1 May 20 22:05:58.488: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:05:59.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:05:59.485: INFO: rc: 1 May 20 22:05:59.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:00.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:00.476: INFO: rc: 1 May 20 22:06:00.477: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:01.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:01.911: INFO: rc: 1 May 20 22:06:01.911: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:02.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:02.667: INFO: rc: 1 May 20 22:06:02.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:03.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:03.886: INFO: rc: 1 May 20 22:06:03.886: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:04.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:04.490: INFO: rc: 1 May 20 22:06:04.490: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:05.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:05.618: INFO: rc: 1 May 20 22:06:05.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:06.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:06.631: INFO: rc: 1 May 20 22:06:06.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:07.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:07.664: INFO: rc: 1 May 20 22:06:07.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:08.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:08.619: INFO: rc: 1 May 20 22:06:08.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:09.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:09.546: INFO: rc: 1 May 20 22:06:09.546: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:10.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:10.667: INFO: rc: 1 May 20 22:06:10.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:11.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:11.476: INFO: rc: 1 May 20 22:06:11.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:12.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:12.518: INFO: rc: 1 May 20 22:06:12.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:13.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:13.476: INFO: rc: 1 May 20 22:06:13.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207+ 32556 echo hostName nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:14.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:14.492: INFO: rc: 1 May 20 22:06:14.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:15.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:15.493: INFO: rc: 1 May 20 22:06:15.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:16.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:16.478: INFO: rc: 1 May 20 22:06:16.478: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:17.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:17.493: INFO: rc: 1 May 20 22:06:17.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:18.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:18.502: INFO: rc: 1 May 20 22:06:18.502: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:19.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:19.743: INFO: rc: 1 May 20 22:06:19.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:20.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:20.489: INFO: rc: 1 May 20 22:06:20.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:21.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:21.626: INFO: rc: 1 May 20 22:06:21.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:22.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:22.485: INFO: rc: 1 May 20 22:06:22.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:23.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:23.696: INFO: rc: 1 May 20 22:06:23.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:24.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:24.563: INFO: rc: 1 May 20 22:06:24.563: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:25.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:25.478: INFO: rc: 1 May 20 22:06:25.478: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:26.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:26.460: INFO: rc: 1 May 20 22:06:26.461: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:27.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:27.640: INFO: rc: 1 May 20 22:06:27.640: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:28.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:28.647: INFO: rc: 1 May 20 22:06:28.647: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:29.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:29.557: INFO: rc: 1 May 20 22:06:29.557: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:30.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:30.472: INFO: rc: 1 May 20 22:06:30.472: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:31.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:31.489: INFO: rc: 1 May 20 22:06:31.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:32.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:32.457: INFO: rc: 1 May 20 22:06:32.457: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:33.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:33.527: INFO: rc: 1 May 20 22:06:33.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:34.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:34.473: INFO: rc: 1 May 20 22:06:34.473: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:35.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:35.541: INFO: rc: 1 May 20 22:06:35.541: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:36.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:36.494: INFO: rc: 1 May 20 22:06:36.494: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32556 + echo hostName nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:37.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:37.496: INFO: rc: 1 May 20 22:06:37.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:38.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:38.695: INFO: rc: 1 May 20 22:06:38.695: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:39.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:39.879: INFO: rc: 1 May 20 22:06:39.879: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:40.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:40.577: INFO: rc: 1 May 20 22:06:40.577: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:41.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:41.491: INFO: rc: 1 May 20 22:06:41.491: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:42.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:42.576: INFO: rc: 1 May 20 22:06:42.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:43.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:43.512: INFO: rc: 1 May 20 22:06:43.512: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:44.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:44.559: INFO: rc: 1 May 20 22:06:44.560: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:45.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:45.497: INFO: rc: 1 May 20 22:06:45.497: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:46.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:46.485: INFO: rc: 1 May 20 22:06:46.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:47.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:47.469: INFO: rc: 1 May 20 22:06:47.469: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:48.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:48.506: INFO: rc: 1 May 20 22:06:48.506: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:49.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:49.494: INFO: rc: 1 May 20 22:06:49.494: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:50.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:50.480: INFO: rc: 1 May 20 22:06:50.480: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:51.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:51.924: INFO: rc: 1 May 20 22:06:51.924: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:52.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:52.564: INFO: rc: 1 May 20 22:06:52.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:06:53.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:53.462: INFO: rc: 1 May 20 22:06:53.462: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:54.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:54.481: INFO: rc: 1 May 20 22:06:54.481: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:55.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:55.816: INFO: rc: 1 May 20 22:06:55.816: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:06:56.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556' May 20 22:06:56.476: INFO: rc: 1 May 20 22:06:56.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32556 nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[The probe keeps being re-run roughly once per second from 22:06:57.223 through 22:07:40.225. Every attempt returns rc: 1 with the same stderr ("nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused") followed by "Retrying...". In a few attempts (22:07:04, 22:07:09, 22:07:39) the sh -x trace lines interleave differently, e.g. "+ + nc -vecho -t -w 2 10.10.190.207 32556 hostName" or "+ echo hostName+ nc -v -t -w 2 10.10.190.207 32556"; that is concurrent trace output from the two halves of the pipeline, not a distinct failure mode.]
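The loop above is the suite polling TCP reachability of NodePort 32556 from the exec pod until a 2m0s budget expires. A minimal Go sketch of that retry pattern, assuming kubectl is on PATH; this is an illustration, not the framework's actual implementation, though wait.PollImmediate is the real k8s.io/apimachinery helper such loops build on:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // checkServiceReachable re-runs the exec/nc probe about once per second
    // until it succeeds or the 2m0s budget expires, mirroring the log above.
    func checkServiceReachable(ns, execPod, host string, port int) error {
    	probe := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
    	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
    		// Equivalent of: kubectl --namespace=<ns> exec <pod> -- /bin/sh -x -c '<probe>'
    		out, err := exec.Command("kubectl", "--namespace="+ns, "exec", execPod,
    			"--", "/bin/sh", "-x", "-c", probe).CombinedOutput()
    		if err != nil {
    			fmt.Printf("rc: 1, retrying: %s\n", out) // the "Retrying..." lines
    			return false, nil                        // transient; keep polling
    		}
    		return true, nil // nc connected: the NodePort is reachable
    	})
    }

    func main() {
    	err := checkServiceReachable("services-9879", "execpod-affinityrptgl",
    		"10.10.190.207", 32556)
    	if err != nil {
    		fmt.Println("FAIL:", err)
    	}
    }

Returning (false, nil) from the condition keeps the poll going; only when the budget lapses does PollImmediate return a timeout error, which the framework wraps into the "service is not reachable within 2m0s timeout" failure seen next.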
May 20 22:07:40.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556'
May 20 22:07:41.195: INFO: rc: 1
May 20 22:07:41.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9879 exec execpod-affinityrptgl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32556:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32556
nc: connect to 10.10.190.207 port 32556 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 22:07:41.196: FAIL: Unexpected error:
    <*errors.errorString | 0xc003e86720>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32556 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32556 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001792160, 0x77b33d8, 0xc00272cdc0, 0xc001520a00, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001740600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001740600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001740600, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 20 22:07:41.197: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9879, will wait for the garbage collector to delete the pods
May 20 22:07:41.259: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.003204ms
May 20 22:07:41.360: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 101.141933ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9879".
STEP: Found 27 events.
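The cleanup step above deletes the ReplicationController and then waits for the garbage collector to remove its pods. A rough client-go sketch of that delete, shown before the 27 collected events; the clientset wiring and the choice of foreground propagation are assumptions for illustration, not necessarily what the framework does internally:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // deleteRCWithDependents removes a ReplicationController and asks the API
    // server to have the garbage collector clean up its pods as well.
    func deleteRCWithDependents(cs kubernetes.Interface, ns, name string) error {
    	policy := metav1.DeletePropagationForeground // dependents are deleted first
    	return cs.CoreV1().ReplicationControllers(ns).Delete(
    		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := deleteRCWithDependents(cs, "services-9879", "affinity-nodeport-transition"); err != nil {
    		fmt.Println("delete failed:", err)
    	}
    }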
May 20 22:07:55.879: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-cvbv6: { } Scheduled: Successfully assigned services-9879/affinity-nodeport-transition-cvbv6 to node2
May 20 22:07:55.879: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-lvnqj: { } Scheduled: Successfully assigned services-9879/affinity-nodeport-transition-lvnqj to node1
May 20 22:07:55.879: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-mnvzn: { } Scheduled: Successfully assigned services-9879/affinity-nodeport-transition-mnvzn to node2
May 20 22:07:55.879: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityrptgl: { } Scheduled: Successfully assigned services-9879/execpod-affinityrptgl to node2
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:17 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-cvbv6
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:17 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-mnvzn
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:17 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-lvnqj
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:20 +0000 UTC - event for affinity-nodeport-transition-lvnqj: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:20 +0000 UTC - event for affinity-nodeport-transition-lvnqj: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 698.13975ms
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:21 +0000 UTC - event for affinity-nodeport-transition-cvbv6: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:21 +0000 UTC - event for affinity-nodeport-transition-lvnqj: {kubelet node1} Created: Created container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:21 +0000 UTC - event for affinity-nodeport-transition-lvnqj: {kubelet node1} Started: Started container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:21 +0000 UTC - event for affinity-nodeport-transition-mnvzn: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 519.124752ms
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:21 +0000 UTC - event for affinity-nodeport-transition-mnvzn: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:22 +0000 UTC - event for affinity-nodeport-transition-cvbv6: {kubelet node2} Created: Created container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:22 +0000 UTC - event for affinity-nodeport-transition-cvbv6: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 986.37899ms
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:22 +0000 UTC - event for affinity-nodeport-transition-cvbv6: {kubelet node2} Started: Started container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:22 +0000 UTC - event for affinity-nodeport-transition-mnvzn: {kubelet node2} Started: Started container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:22 +0000 UTC - event for affinity-nodeport-transition-mnvzn: {kubelet node2} Created: Created container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:31 +0000 UTC - event for execpod-affinityrptgl: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:32 +0000 UTC - event for execpod-affinityrptgl: {kubelet node2} Started: Started container agnhost-container
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:32 +0000 UTC - event for execpod-affinityrptgl: {kubelet node2} Created: Created container agnhost-container
May 20 22:07:55.879: INFO: At 2022-05-20 22:05:32 +0000 UTC - event for execpod-affinityrptgl: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 347.730557ms
May 20 22:07:55.879: INFO: At 2022-05-20 22:07:41 +0000 UTC - event for affinity-nodeport-transition-cvbv6: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:07:41 +0000 UTC - event for affinity-nodeport-transition-lvnqj: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:07:41 +0000 UTC - event for affinity-nodeport-transition-mnvzn: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 20 22:07:55.879: INFO: At 2022-05-20 22:07:41 +0000 UTC - event for execpod-affinityrptgl: {kubelet node2} Killing: Stopping container agnhost-container
May 20 22:07:55.881: INFO: POD NODE PHASE GRACE CONDITIONS
May 20 22:07:55.882: INFO: 
May 20 22:07:55.885: INFO: Logging node info for node master1
May 20 22:07:55.888: INFO: Node Info: &Node{ObjectMeta:{master1 b016dcf2-74b7-4456-916a-8ca363b9ccc3 42831 0 2022-05-20 20:01:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 22:07:55.888: INFO: Logging kubelet events for node master1
May 20 22:07:55.890: INFO: Logging pods the kubelet thinks is on node master1
May 20 22:07:55.911: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:55.911: INFO: Container node-exporter ready: true, restart count 0
May 20 22:07:55.911: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-scheduler ready: true, restart count 1
May 20 22:07:55.911: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-apiserver ready: true, restart count 0
May 20 22:07:55.911: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-controller-manager ready: true, restart count 3
May 20 22:07:55.911: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-proxy ready: true, restart count 2
May 20 22:07:55.911: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 22:07:55.911: INFO: Init container install-cni ready: true, restart count 2
May 20 22:07:55.911: INFO: Container kube-flannel ready: true, restart count 1
May 20 22:07:55.911: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:55.911: INFO: Container nfd-controller ready: true, restart count 0
May 20 22:07:55.911: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-multus ready: true, restart count 1
May 20 22:07:55.911: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:55.911: INFO: Container docker-registry ready: true, restart count 0
May 20 22:07:55.911: INFO: Container nginx ready: true, restart count 0
May 20 22:07:55.911: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:55.911: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:55.911: INFO: Container prometheus-operator ready: true, restart count 0
May 20 22:07:56.000: INFO: Latency metrics for node master1
May 20 22:07:56.000: INFO: Logging node info for node master2
May 20 22:07:56.002: INFO: Node Info: &Node{ObjectMeta:{master2 ddc04b08-e43a-4e18-a612-aa3bf7f8411e 42832 0 2022-05-20 20:01:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 22:07:56.002: INFO: Logging kubelet events for node master2
May 20 22:07:56.005: INFO: Logging pods the kubelet thinks is on node master2
May 20 22:07:56.014: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container kube-scheduler ready: true, restart count 3
May 20 22:07:56.014: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container kube-multus ready: true, restart count 1
May 20 22:07:56.014: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container kube-apiserver ready: true, restart count 0
May 20 22:07:56.014: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container kube-controller-manager ready: true, restart count 2
May 20 22:07:56.014: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container kube-proxy ready: true, restart count 1
May 20 22:07:56.014: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 22:07:56.014: INFO: Init container install-cni ready: true, restart count 2
May 20 22:07:56.014: INFO: Container kube-flannel ready: true, restart count 1
May 20 22:07:56.014: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container coredns ready: true, restart count 1
May 20 22:07:56.014: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded)
May 20 22:07:56.014: INFO: Container autoscaler ready: true, restart count 1
May 20 22:07:56.014: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 22:07:56.014: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 22:07:56.014: INFO: Container node-exporter ready: true, restart count 0
May 20 22:07:56.097: INFO: Latency metrics for node master2
May 20 22:07:56.097: INFO: Logging node info for node master3
May 20 22:07:56.099: INFO: Node Info: &Node{ObjectMeta:{master3 f42c1bd6-d828-4857-9180-56c73dcc370f 42847 0 2022-05-20 20:02:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:]
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:55 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:07:56.100: INFO: Logging kubelet events for node master3 May 20 22:07:56.101: INFO: Logging pods the kubelet thinks are on node master3 May 20 22:07:56.110: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.110: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:07:56.110: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.110: INFO: Container kube-multus ready: true, restart count 1 May 20 22:07:56.110: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.110: INFO: Container coredns ready: true, restart count 1 May 20 22:07:56.110: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:07:56.110: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:07:56.110: INFO: Container node-exporter ready: true, restart count 0 May 20 22:07:56.110: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.110: INFO: Container kube-controller-manager ready: true, restart count 1 May 20 22:07:56.110: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.110: INFO: Container kube-scheduler ready: true, restart count 2 May 20 22:07:56.110: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.110: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:07:56.110: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:07:56.110: INFO: Init container install-cni ready: true, restart count 0 May 20 22:07:56.110: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:07:56.197: INFO: Latency metrics for node master3 May 20 22:07:56.197: INFO: Logging node info for node node1 May 20 22:07:56.199: INFO: Node Info: &Node{ObjectMeta:{node1 65c381dd-b6f5-4e67-a327-7a45366d15af 42745 0 2022-05-20 20:03:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:15:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:50 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:50 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:50 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:50 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:07:56.200: INFO: Logging kubelet events for node node1 May 20 22:07:56.203: INFO: Logging pods the kubelet thinks are on node node1 May 20 22:07:56.219: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:07:56.219: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses 
recorded) May 20 22:07:56.219: INFO: Container config-reloader ready: true, restart count 0 May 20 22:07:56.219: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:07:56.219: INFO: Container grafana ready: true, restart count 0 May 20 22:07:56.219: INFO: Container prometheus ready: true, restart count 1 May 20 22:07:56.219: INFO: simpletest.rc-74rff started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.219: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:07:56.219: INFO: Container collectd ready: true, restart count 0 May 20 22:07:56.219: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:07:56.219: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:07:56.219: INFO: simpletest.rc-cs279 started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.219: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:07:56.219: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded) May 20 22:07:56.219: INFO: Container discover ready: false, restart count 0 May 20 22:07:56.219: INFO: Container init ready: false, restart count 0 May 20 22:07:56.219: INFO: Container install ready: false, restart count 0 May 20 22:07:56.219: INFO: simpletest.rc-2887t started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.219: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:07:56.219: INFO: Init container install-cni ready: true, restart count 2 May 20 22:07:56.219: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:07:56.219: INFO: test-pod started at 2022-05-20 22:06:36 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container webserver ready: true, restart count 0 May 20 22:07:56.219: INFO: simpletest.rc-9r26z started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.219: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:07:56.219: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded) May 20 22:07:56.219: INFO: Container nodereport ready: true, restart count 0 May 20 22:07:56.219: INFO: Container reconcile ready: true, restart count 0 May 20 22:07:56.219: INFO: concurrent-27551407-dc2mt started at 2022-05-20 22:07:00 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container c ready: true, restart count 0 May 20 22:07:56.219: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.219: INFO: Container kube-multus ready: true, restart count 1 May 20 22:07:56.220: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.220: INFO: Container 
kubernetes-dashboard ready: true, restart count 2 May 20 22:07:56.220: INFO: sample-webhook-deployment-78988fc6cd-tkzdt started at 2022-05-20 22:07:51 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.220: INFO: Container sample-webhook ready: true, restart count 0 May 20 22:07:56.220: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.220: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:07:56.220: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:07:56.220: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:07:56.220: INFO: Container node-exporter ready: true, restart count 0 May 20 22:07:56.450: INFO: Latency metrics for node node1 May 20 22:07:56.450: INFO: Logging node info for node node2 May 20 22:07:56.452: INFO: Node Info: &Node{ObjectMeta:{node2 a0e0a426-876d-4419-96e4-c6977ef3393c 42709 0 2022-05-20 20:03:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 
kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:48 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:48 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:07:48 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:07:48 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:07:56.453: INFO: Logging kubelet events for node node2 May 20 22:07:56.456: INFO: Logging pods the kubelet thinks are on node node2 May 20 22:07:56.471: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:07:56.471: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:07:56.471: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:07:56.471: INFO: Init container install-cni ready: true, restart count 1 May 20 22:07:56.471: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:07:56.471: INFO: simpletest.rc-xnb8b started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.471: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:07:56.471: INFO: to-be-attached-pod started at 2022-05-20 22:07:52 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: 
Container container1 ready: false, restart count 0 May 20 22:07:56.471: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:07:56.471: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded) May 20 22:07:56.471: INFO: Container nodereport ready: true, restart count 0 May 20 22:07:56.471: INFO: Container reconcile ready: true, restart count 0 May 20 22:07:56.471: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:07:56.471: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:07:56.471: INFO: Container node-exporter ready: true, restart count 0 May 20 22:07:56.471: INFO: simpletest.rc-9pkvv started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.471: INFO: forbid-27551406-k5fwc started at 2022-05-20 22:06:00 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container c ready: true, restart count 0 May 20 22:07:56.471: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:07:56.471: INFO: simpletest.rc-dpplk started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.471: INFO: simpletest.rc-7wzh7 started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.471: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.471: INFO: Container kube-multus ready: true, restart count 1 May 20 22:07:56.471: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.472: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:07:56.472: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.472: INFO: Container tas-extender ready: true, restart count 0 May 20 22:07:56.472: INFO: simpletest.rc-nnghj started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.472: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.472: INFO: sample-webhook-deployment-78988fc6cd-t8ftz started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.472: INFO: Container sample-webhook ready: true, restart count 0 May 20 22:07:56.472: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded) May 20 22:07:56.472: INFO: Container discover ready: false, restart count 0 May 20 22:07:56.472: INFO: Container init ready: false, restart count 0 May 20 22:07:56.472: INFO: Container install ready: false, restart count 0 May 20 22:07:56.472: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:07:56.472: INFO: Container collectd ready: true, restart count 0 May 20 22:07:56.472: INFO: Container collectd-exporter ready: true, restart count 0 May 20 
22:07:56.472: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:07:56.472: INFO: busybox-1d29b80b-166a-4436-a3e3-d054a65734ae started at 2022-05-20 22:07:21 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.472: INFO: Container busybox ready: true, restart count 0 May 20 22:07:56.472: INFO: simpletest.rc-9jzxt started at 2022-05-20 22:07:37 +0000 UTC (0+1 container statuses recorded) May 20 22:07:56.472: INFO: Container nginx ready: true, restart count 0 May 20 22:07:56.932: INFO: Latency metrics for node node2 May 20 22:07:56.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9879" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [159.407 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:07:41.196: Unexpected error: <*errors.errorString | 0xc003e86720>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32556 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32556 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":155,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:37.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 20 22:07:37.590: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:07:37.601: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:07:39.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:07:41.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:07:43.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:07:45.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:07:47.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:07:49.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681257, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:07:52.622: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 20 22:07:58.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-7339 attach --namespace=webhook-7339 to-be-attached-pod -i -c=container1' May 20 22:07:58.890: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:07:58.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7339" for this suite. STEP: Destroying namespace "webhook-7339-markers" for this suite. 
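The rc: 1 above is the expected outcome: a validating webhook registered for CONNECT on the pods/attach subresource is what rejects the kubectl attach. A minimal client-go sketch of that kind of registration follows; the webhook name, service path, and CA handling are illustrative, not read from this run.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/pods/attach" // illustrative path served by the webhook pod

	whc := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// CONNECT on pods/attach is the operation `kubectl attach` triggers.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Connect},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7339", // the test namespace above
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: nil, // the suite injects the server cert it generated earlier
			},
			SideEffects:             &sideEffects,
			FailurePolicy:           &fail,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), whc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}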
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.741 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":10,"skipped":136,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:10.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0520 22:06:10.746740 26 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:00.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7198" for this suite. 
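The overlap exercised above is controlled by the CronJob's concurrencyPolicy. A sketch of a batch/v1 CronJob (GA in the v1.21 cluster under test) that allows jobs to run concurrently; the object names and busybox image are illustrative, and the sleep deliberately outlives the one-minute schedule so runs overlap.

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "concurrent"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.AllowConcurrent, // lets successive jobs overlap
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox",
								Command: []string{"sleep", "300"}, // outlives the schedule interval
							}},
						},
					},
				},
			},
		},
	}
	if _, err := cs.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}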
• [SLOW TEST:110.048 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":14,"skipped":291,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":378,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:51.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:07:51.512: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:07:53.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681271, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681271, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681271, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681271, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:07:56.537: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:07:56.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8902-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:04.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8386" for this suite. STEP: Destroying namespace "webhook-8386-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.509 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":23,"skipped":378,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:56.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-4edb2255-6e51-448d-b648-07aa6d317a6f STEP: Creating the pod May 20 22:07:56.577: INFO: The status of Pod pod-configmaps-d80e335d-7c5e-42e1-8102-1e7277061b79 is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:58.581: INFO: The status of Pod pod-configmaps-d80e335d-7c5e-42e1-8102-1e7277061b79 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:00.583: INFO: The status of Pod pod-configmaps-d80e335d-7c5e-42e1-8102-1e7277061b79 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-4edb2255-6e51-448d-b648-07aa6d317a6f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:04.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6898" for this suite. 
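Reproducing the update step above outside the suite only takes an Update on the ConfigMap backing the volume; the kubelet rewrites the projected file on a later sync without restarting the pod, which is what the test then observes. A sketch with assumed namespace, ConfigMap name, and key:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Fetch the ConfigMap mounted by the pod and change one key; the kubelet
	// refreshes the file in the volume on its next sync period.
	cm, err := cs.CoreV1().ConfigMaps("default").Get(ctx, "configmap-test-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps("default").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}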
• [SLOW TEST:8.164 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":398,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:56.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-c80ee50d-f243-43d7-9f44-84d24e6f150e STEP: Creating configMap with name cm-test-opt-upd-73fdc957-1785-45fa-a08e-8273c010dbbd STEP: Creating the pod May 20 22:07:57.011: INFO: The status of Pod pod-configmaps-22ca96a8-1c95-4d3e-9316-4a770d39c99b is Pending, waiting for it to be Running (with Ready = true) May 20 22:07:59.014: INFO: The status of Pod pod-configmaps-22ca96a8-1c95-4d3e-9316-4a770d39c99b is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:01.014: INFO: The status of Pod pod-configmaps-22ca96a8-1c95-4d3e-9316-4a770d39c99b is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:03.016: INFO: The status of Pod pod-configmaps-22ca96a8-1c95-4d3e-9316-4a770d39c99b is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:05.015: INFO: The status of Pod pod-configmaps-22ca96a8-1c95-4d3e-9316-4a770d39c99b is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-c80ee50d-f243-43d7-9f44-84d24e6f150e STEP: Updating configmap cm-test-opt-upd-73fdc957-1785-45fa-a08e-8273c010dbbd STEP: Creating configMap with name cm-test-opt-create-7dc6062e-d2ed-4c9e-aad8-b96bc68d26f3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:07.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8289" for this suite. 
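The optional variant above relies on ConfigMapVolumeSource.Optional, which lets the pod run even while a referenced ConfigMap is absent and picks the data up once it is created (the cm-test-opt-create case). A hedged sketch of such a volume; all names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm-opt",
					MountPath: "/etc/cm-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-opt",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						// Optional: the pod starts (and the files appear later)
						// even if this ConfigMap does not exist yet.
						Optional: &optional,
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}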
• [SLOW TEST:10.117 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":159,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:07.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching May 20 22:08:07.164: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating May 20 22:08:07.190: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:07.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-5151" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":14,"skipped":181,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:03:10.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0520 22:03:10.400356 27 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:10.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-405" for this suite. 
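The suspension checked above is just .spec.suspend on the CronJob: while it is true, the controller creates no Jobs at the scheduled times. One way to flip it, with an assumed CronJob name and namespace:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic merge patch that suspends the CronJob; set it back to false
	// to resume scheduling.
	patch := []byte(`{"spec":{"suspend":true}}`)
	if _, err := cs.BatchV1().CronJobs("default").Patch(
		context.TODO(), "my-cronjob", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}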
• [SLOW TEST:300.054 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":8,"skipped":92,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:07.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created May 20 22:08:07.258: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:09.261: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:11.262: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 20 22:08:12.277: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:13.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3437" for this suite. 
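The release step above boils down to relabelling the pod so it no longer matches the ReplicaSet selector; the controller then drops its ownerReference and spins up a replacement. A sketch, with the pod name taken from the test but the label value and namespace illustrative; note the release is asynchronous, so real code would poll:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pod, err := cs.CoreV1().Pods("default").Get(ctx, "pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	// Stop matching the ReplicaSet's selector.
	pod.Labels["name"] = "released"
	pod, err = cs.CoreV1().Pods("default").Update(ctx, pod, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	// May still show the old owner until the controller reconciles.
	fmt.Printf("ownerReferences after relabel: %v\n", pod.OwnerReferences)
}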
• [SLOW TEST:6.079 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":15,"skipped":182,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:13.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:13.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3257" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":16,"skipped":184,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:10.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-eb022202-b709-4b66-98b9-09bda1616826 STEP: Creating a pod to test consume configMaps May 20 22:08:10.505: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb" in namespace "projected-3911" to be "Succeeded or Failed" May 20 22:08:10.508: INFO: Pod "pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.143485ms May 20 22:08:12.512: INFO: Pod "pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006689419s May 20 22:08:14.517: INFO: Pod "pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012005569s STEP: Saw pod success May 20 22:08:14.517: INFO: Pod "pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb" satisfied condition "Succeeded or Failed" May 20 22:08:14.519: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb container agnhost-container: STEP: delete the pod May 20 22:08:14.533: INFO: Waiting for pod pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb to disappear May 20 22:08:14.535: INFO: Pod pod-projected-configmaps-50a34316-45d3-4d39-a2be-b6f61a9da7fb no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:14.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3911" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:04.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics May 20 22:08:14.742: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 20 22:08:14.809: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:14.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7864" for this suite. 
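Non-orphaning deletion like the garbage collector run above can be requested with a Background propagation policy: the ReplicationController object goes away immediately and the garbage collector deletes its pods afterwards. A sketch with assumed names:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation: the RC is removed at once and the GC then
	// deletes the pods it owned (the opposite of orphaning).
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}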
• [SLOW TEST:10.162 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":24,"skipped":387,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:21.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-1d29b80b-166a-4436-a3e3-d054a65734ae in namespace container-probe-5268 May 20 22:07:25.850: INFO: Started pod busybox-1d29b80b-166a-4436-a3e3-d054a65734ae in namespace container-probe-5268 STEP: checking the pod's current state and verifying that restartCount is present May 20 22:07:25.853: INFO: Initial restart count of pod busybox-1d29b80b-166a-4436-a3e3-d054a65734ae is 0 May 20 22:08:15.969: INFO: Restart count of pod container-probe-5268/busybox-1d29b80b-166a-4436-a3e3-d054a65734ae is now 1 (50.115613094s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:15.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5268" for this suite. 
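The restart observed above (restartCount 0 to 1 after ~50s) is driven by an exec liveness probe. A sketch of an equivalent pod, assuming k8s.io/api v0.21 to match this cluster, where the probe handler field is still named Handler (later releases rename it ProbeHandler); the shell command is illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// /tmp/health exists for 10s, then disappears; the next probe
				// fails and the kubelet restarts the container.
				Args: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}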
• [SLOW TEST:54.179 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":518,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:15.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:16.282: INFO: Checking APIGroup: apiregistration.k8s.io May 20 22:08:16.284: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 May 20 22:08:16.284: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.284: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 May 20 22:08:16.284: INFO: Checking APIGroup: apps May 20 22:08:16.285: INFO: PreferredVersion.GroupVersion: apps/v1 May 20 22:08:16.285: INFO: Versions found [{apps/v1 v1}] May 20 22:08:16.285: INFO: apps/v1 matches apps/v1 May 20 22:08:16.285: INFO: Checking APIGroup: events.k8s.io May 20 22:08:16.286: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 May 20 22:08:16.286: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.286: INFO: events.k8s.io/v1 matches events.k8s.io/v1 May 20 22:08:16.286: INFO: Checking APIGroup: authentication.k8s.io May 20 22:08:16.286: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 May 20 22:08:16.286: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.286: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 May 20 22:08:16.286: INFO: Checking APIGroup: authorization.k8s.io May 20 22:08:16.287: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 May 20 22:08:16.287: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.287: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 May 20 22:08:16.287: INFO: Checking APIGroup: autoscaling May 20 22:08:16.288: INFO: PreferredVersion.GroupVersion: autoscaling/v1 May 20 22:08:16.288: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] May 20 22:08:16.288: INFO: autoscaling/v1 matches autoscaling/v1 May 20 22:08:16.288: INFO: Checking APIGroup: batch May 20 22:08:16.289: INFO: PreferredVersion.GroupVersion: batch/v1 May 20 22:08:16.289: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] May 20 22:08:16.289: INFO: batch/v1 matches batch/v1 
May 20 22:08:16.289: INFO: Checking APIGroup: certificates.k8s.io May 20 22:08:16.290: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 May 20 22:08:16.290: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.290: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 May 20 22:08:16.290: INFO: Checking APIGroup: networking.k8s.io May 20 22:08:16.291: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 May 20 22:08:16.291: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.291: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 May 20 22:08:16.291: INFO: Checking APIGroup: extensions May 20 22:08:16.292: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 May 20 22:08:16.292: INFO: Versions found [{extensions/v1beta1 v1beta1}] May 20 22:08:16.292: INFO: extensions/v1beta1 matches extensions/v1beta1 May 20 22:08:16.292: INFO: Checking APIGroup: policy May 20 22:08:16.292: INFO: PreferredVersion.GroupVersion: policy/v1 May 20 22:08:16.293: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] May 20 22:08:16.293: INFO: policy/v1 matches policy/v1 May 20 22:08:16.293: INFO: Checking APIGroup: rbac.authorization.k8s.io May 20 22:08:16.293: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 May 20 22:08:16.293: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.293: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 May 20 22:08:16.293: INFO: Checking APIGroup: storage.k8s.io May 20 22:08:16.294: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 May 20 22:08:16.294: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.294: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 May 20 22:08:16.294: INFO: Checking APIGroup: admissionregistration.k8s.io May 20 22:08:16.295: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 May 20 22:08:16.295: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.295: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 May 20 22:08:16.295: INFO: Checking APIGroup: apiextensions.k8s.io May 20 22:08:16.297: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 May 20 22:08:16.297: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.297: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 May 20 22:08:16.297: INFO: Checking APIGroup: scheduling.k8s.io May 20 22:08:16.298: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 May 20 22:08:16.298: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.298: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 May 20 22:08:16.298: INFO: Checking APIGroup: coordination.k8s.io May 20 22:08:16.298: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 May 20 22:08:16.298: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.298: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 May 20 22:08:16.298: INFO: Checking APIGroup: node.k8s.io May 20 22:08:16.299: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 May 20 22:08:16.299: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.299: INFO: node.k8s.io/v1 
matches node.k8s.io/v1 May 20 22:08:16.299: INFO: Checking APIGroup: discovery.k8s.io May 20 22:08:16.300: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 May 20 22:08:16.300: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.300: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 May 20 22:08:16.300: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io May 20 22:08:16.301: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 May 20 22:08:16.301: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.301: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 May 20 22:08:16.301: INFO: Checking APIGroup: intel.com May 20 22:08:16.303: INFO: PreferredVersion.GroupVersion: intel.com/v1 May 20 22:08:16.303: INFO: Versions found [{intel.com/v1 v1}] May 20 22:08:16.303: INFO: intel.com/v1 matches intel.com/v1 May 20 22:08:16.303: INFO: Checking APIGroup: k8s.cni.cncf.io May 20 22:08:16.304: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 May 20 22:08:16.304: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] May 20 22:08:16.304: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 May 20 22:08:16.304: INFO: Checking APIGroup: monitoring.coreos.com May 20 22:08:16.305: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 May 20 22:08:16.305: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] May 20 22:08:16.305: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 May 20 22:08:16.305: INFO: Checking APIGroup: telemetry.intel.com May 20 22:08:16.306: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 May 20 22:08:16.306: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] May 20 22:08:16.306: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 May 20 22:08:16.306: INFO: Checking APIGroup: custom.metrics.k8s.io May 20 22:08:16.307: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 May 20 22:08:16.307: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] May 20 22:08:16.307: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:16.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-3854" for this suite. 
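The per-group check the test performs above, i.e. that PreferredVersion.GroupVersion also appears in the group's Versions list, is straightforward with the discovery client:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		// The preferred version must be one of the advertised versions.
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				found = true
			}
		}
		fmt.Printf("%s: preferred %s, listed among versions: %v\n",
			g.Name, g.PreferredVersion.GroupVersion, found)
	}
}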
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":28,"skipped":522,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:04.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:04.768: INFO: Pod name sample-pod: Found 0 pods out of 1 May 20 22:08:09.773: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset May 20 22:08:09.779: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet May 20 22:08:09.784: INFO: observed ReplicaSet test-rs in namespace replicaset-4287 with ReadyReplicas 1, AvailableReplicas 1 May 20 22:08:09.793: INFO: observed ReplicaSet test-rs in namespace replicaset-4287 with ReadyReplicas 1, AvailableReplicas 1 May 20 22:08:09.802: INFO: observed ReplicaSet test-rs in namespace replicaset-4287 with ReadyReplicas 1, AvailableReplicas 1 May 20 22:08:09.805: INFO: observed ReplicaSet test-rs in namespace replicaset-4287 with ReadyReplicas 1, AvailableReplicas 1 May 20 22:08:14.533: INFO: observed ReplicaSet test-rs in namespace replicaset-4287 with ReadyReplicas 2, AvailableReplicas 2 May 20 22:08:16.738: INFO: observed Replicaset test-rs in namespace replicaset-4287 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4287" for this suite. 
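The scale-up step above can go through the scale subresource instead of updating the full ReplicaSet object; a sketch using the test's object name but an assumed namespace:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Read and write only the scale subresource of the ReplicaSet.
	scale, err := cs.AppsV1().ReplicaSets("default").GetScale(ctx, "test-rs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 3
	if _, err := cs.AppsV1().ReplicaSets("default").UpdateScale(
		ctx, "test-rs", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}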
• [SLOW TEST:12.010 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":17,"skipped":416,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:37.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics May 20 22:08:17.341: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 20 22:08:17.404: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: May 20 22:08:17.404: INFO: Deleting pod "simpletest.rc-2887t" in namespace "gc-2402" May 20 22:08:17.409: INFO: Deleting pod "simpletest.rc-74rff" in namespace "gc-2402" May 20 22:08:17.416: INFO: Deleting pod "simpletest.rc-7wzh7" in namespace "gc-2402" May 20 22:08:17.423: INFO: Deleting pod "simpletest.rc-9jzxt" in namespace "gc-2402" May 20 22:08:17.430: INFO: Deleting pod "simpletest.rc-9pkvv" in namespace "gc-2402" May 20 22:08:17.437: INFO: Deleting pod "simpletest.rc-9r26z" in namespace "gc-2402" May 20 22:08:17.443: INFO: Deleting pod "simpletest.rc-cs279" in namespace "gc-2402" May 20 22:08:17.448: INFO: Deleting pod "simpletest.rc-dpplk" in namespace "gc-2402" May 20 22:08:17.454: INFO: Deleting pod "simpletest.rc-nnghj" in namespace "gc-2402" May 20 22:08:17.459: INFO: Deleting pod "simpletest.rc-xnb8b" in namespace "gc-2402" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:17.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2402" for this 
suite. • [SLOW TEST:40.201 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":33,"skipped":600,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:14.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics May 20 22:08:20.750: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 20 22:08:20.815: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:20.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2195" for this suite. 
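The keep-the-rc-around behaviour above corresponds to Foreground propagation: the RC lingers with a deletionTimestamp and the foregroundDeletion finalizer until all of its pods are deleted, and only then disappears. A sketch with assumed names:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the RC is only removed after the garbage
	// collector has deleted every pod it owns.
	policy := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}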
• [SLOW TEST:6.149 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":10,"skipped":175,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:00.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5140 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5140 STEP: creating replication controller externalsvc in namespace services-5140 I0520 22:08:00.880691 26 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5140, replica count: 2 I0520 22:08:03.931907 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:06.932978 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 20 22:08:06.944: INFO: Creating new exec pod May 20 22:08:10.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5140 exec execpod8f8br -- /bin/sh -x -c nslookup clusterip-service.services-5140.svc.cluster.local' May 20 22:08:11.404: INFO: stderr: "+ nslookup clusterip-service.services-5140.svc.cluster.local\n" May 20 22:08:11.404: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-5140.svc.cluster.local\tcanonical name = externalsvc.services-5140.svc.cluster.local.\nName:\texternalsvc.services-5140.svc.cluster.local\nAddress: 10.233.24.144\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5140, will wait for the garbage collector to delete the pods May 20 22:08:11.463: INFO: Deleting ReplicationController externalsvc took: 5.887525ms May 20 22:08:11.563: INFO: Terminating ReplicationController externalsvc pods took: 100.261706ms May 20 22:08:21.573: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:21.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "services-5140" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:20.746 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":15,"skipped":324,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:16.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:08:16.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130" in namespace "downward-api-4416" to be "Succeeded or Failed" May 20 22:08:16.801: INFO: Pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122536ms May 20 22:08:18.806: INFO: Pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007547129s May 20 22:08:20.813: INFO: Pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014240442s May 20 22:08:22.816: INFO: Pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017618491s May 20 22:08:24.820: INFO: Pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.02141996s STEP: Saw pod success May 20 22:08:24.820: INFO: Pod "downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130" satisfied condition "Succeeded or Failed" May 20 22:08:24.823: INFO: Trying to get logs from node node1 pod downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130 container client-container: STEP: delete the pod May 20 22:08:24.837: INFO: Waiting for pod downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130 to disappear May 20 22:08:24.839: INFO: Pod downwardapi-volume-b2369ace-f7f8-44be-b39d-90c61db50130 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:24.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4416" for this suite. • [SLOW TEST:8.081 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":423,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:13.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:13.439: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8" in namespace "security-context-test-5042" to be "Succeeded or Failed" May 20 22:08:13.445: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.268928ms May 20 22:08:15.448: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008637985s May 20 22:08:17.452: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01268641s May 20 22:08:19.457: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017122485s May 20 22:08:21.461: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02119742s May 20 22:08:23.466: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.02627797s May 20 22:08:25.470: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030958523s May 20 22:08:27.475: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035856533s May 20 22:08:29.481: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.041948475s May 20 22:08:31.486: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.04664813s May 20 22:08:31.486: INFO: Pod "alpine-nnp-false-36cdca75-9e25-48a3-baa6-5a73d13274d8" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:31.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5042" for this suite. • [SLOW TEST:18.102 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":192,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:07:58.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4274 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 22:07:58.964: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 22:07:58.994: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:00.998: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:02.998: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:04.999: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:06.998: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:09.002: INFO: The status of Pod netserver-0 is Running 
(Ready = false) May 20 22:08:10.997: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:12.999: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:14.999: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:16.999: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:19.000: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 22:08:19.004: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 22:08:21.008: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 22:08:23.007: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 22:08:25.011: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 22:08:27.008: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 22:08:33.048: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 20 22:08:33.048: INFO: Going to poll 10.244.4.14 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 20 22:08:33.051: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.14:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4274 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:08:33.051: INFO: >>> kubeConfig: /root/.kube/config May 20 22:08:33.136: INFO: Found all 1 expected endpoints: [netserver-0] May 20 22:08:33.136: INFO: Going to poll 10.244.3.66 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 20 22:08:33.139: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.66:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4274 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:08:33.139: INFO: >>> kubeConfig: /root/.kube/config May 20 22:08:33.265: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:33.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4274" for this suite. 
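------------------------------
Sketch for the [sig-network] Networking spec above: the node-pod HTTP check polls each netserver pod's /hostName endpoint from a host-network test pod with the curl command shown in the log. A minimal Go equivalent of that probe loop, assuming the pod IP, port, and retry budget reported in the log; this is a sketch, not the e2e framework's actual poller:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHostName mirrors the spec's probe: GET http://<podIP>:8080/hostName,
// retrying until a non-empty hostname comes back or the try budget runs out.
func pollHostName(podIP string, maxTries int) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	for i := 0; i < maxTries; i++ {
		resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if len(body) > 0 {
				return string(body), nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("no hostname from %s after %d tries", podIP, maxTries)
}

func main() {
	// 10.244.4.14 and 34 tries are the values reported in the log above
	name, err := pollHostName("10.244.4.14", 34)
	fmt.Println(name, err)
}
------------------------------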
• [SLOW TEST:34.341 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:21.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-f145a542-fad2-48cb-920c-f52b475e4ff3 STEP: Creating a pod to test consume secrets May 20 22:08:21.718: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad" in namespace "projected-2419" to be "Succeeded or Failed" May 20 22:08:21.720: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213778ms May 20 22:08:23.724: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005981086s May 20 22:08:25.728: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009790654s May 20 22:08:27.731: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013050026s May 20 22:08:29.736: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018270399s May 20 22:08:31.742: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023684336s May 20 22:08:33.747: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028817873s STEP: Saw pod success May 20 22:08:33.747: INFO: Pod "pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad" satisfied condition "Succeeded or Failed" May 20 22:08:33.750: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad container projected-secret-volume-test: STEP: delete the pod May 20 22:08:33.774: INFO: Waiting for pod pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad to disappear May 20 22:08:33.776: INFO: Pod pod-projected-secrets-f9ba5e96-240a-4e13-9a30-6eccf16c21ad no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:33.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2419" for this suite. 
• [SLOW TEST:12.106 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":369,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:17.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:08:17.940: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:08:19.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:21.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:23.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:25.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:27.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:29.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681297, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:08:32.962: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 20 22:08:33.977: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:33.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3789" for this suite. STEP: Destroying namespace "webhook-3789-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.521 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":34,"skipped":612,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:34.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 20 22:08:34.072: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-426 29c75d2a-46b8-48ad-890d-7a85ab1bdfe3 44243 0 2022-05-20 22:08:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-20 22:08:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:08:34.073: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-426 29c75d2a-46b8-48ad-890d-7a85ab1bdfe3 44244 0 2022-05-20 22:08:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-20 22:08:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:34.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-426" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":35,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:20.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-6127/secret-test-0f011c8d-2e5f-4799-8c81-69c8e999110d STEP: Creating a pod to test consume secrets May 20 22:08:20.903: INFO: Waiting up to 5m0s for pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498" in namespace "secrets-6127" to be "Succeeded or Failed" May 20 22:08:20.905: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 1.987387ms May 20 22:08:22.909: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006022044s May 20 22:08:24.920: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016579593s May 20 22:08:26.926: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022454151s May 20 22:08:28.932: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028316934s May 20 22:08:30.935: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031286188s May 20 22:08:32.938: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034860895s May 20 22:08:34.942: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.038937578s STEP: Saw pod success May 20 22:08:34.942: INFO: Pod "pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498" satisfied condition "Succeeded or Failed" May 20 22:08:34.945: INFO: Trying to get logs from node node2 pod pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498 container env-test: STEP: delete the pod May 20 22:08:34.957: INFO: Waiting for pod pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498 to disappear May 20 22:08:34.959: INFO: Pod pod-configmaps-517b9f5d-3135-43a8-be60-c152c5b1f498 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:34.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6127" for this suite. • [SLOW TEST:14.100 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":191,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:16.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:22.393: INFO: Deleting pod "var-expansion-dfa3ebc3-2f6a-44f5-bd05-3ff1b95eee10" in namespace "var-expansion-3149" May 20 22:08:22.397: INFO: Wait up to 5m0s for pod "var-expansion-dfa3ebc3-2f6a-44f5-bd05-3ff1b95eee10" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:36.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3149" for this suite. 
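------------------------------
Sketch for the [sig-node] Variable Expansion spec above: kubelet must reject a volume subPathExpr that expands to an absolute path, so the pod never starts and the test only has to delete it. A minimal sketch of the rejected shape, assuming an env var whose value is the absolute path /tmp; image and names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// absoluteSubPathPod builds the shape the spec expects kubelet to reject:
// SubPathExpr "$(SUBPATH)" where SUBPATH expands to an absolute path.
func absoluteSubPathPod() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name:         "workdir",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:  "dapi-container",
			Image: "busybox:1.35", // illustrative image
			Env:   []corev1.EnvVar{{Name: "SUBPATH", Value: "/tmp"}},
			VolumeMounts: []corev1.VolumeMount{{
				Name:        "workdir",
				MountPath:   "/volume_mount",
				SubPathExpr: "$(SUBPATH)", // expands to /tmp, which kubelet refuses
			}},
		}},
	}
}

func main() {
	fmt.Printf("%+v\n", absoluteSubPathPod())
}
------------------------------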
• [SLOW TEST:20.064 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":29,"skipped":537,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:36.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:36.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6064" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":30,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:24.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:36.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9912" for this suite. 
• [SLOW TEST:12.060 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":426,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:36.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:36.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7253" for this suite. 
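------------------------------
Sketch for the Pods Set QOS Class spec above: when every container's requests equal its limits for both cpu and memory, the API server sets status.qosClass to Guaranteed, which is what "verifying QOS class is set on the pod" checks. A minimal sketch with illustrative quantities and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// requests == limits for cpu and memory -> QOS class "Guaranteed"
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	c := corev1.Container{
		Name:      "agnhost",
		Image:     "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative image
		Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
	}
	fmt.Printf("%+v\n", c)
	// after submission, the spec checks pod.Status.QOSClass against this constant
	fmt.Println("expected QOS class:", corev1.PodQOSGuaranteed)
}
------------------------------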
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":20,"skipped":426,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:36.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:37.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5701 version' May 20 22:08:37.108: INFO: stderr: "" May 20 22:08:37.108: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:37.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5701" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":21,"skipped":434,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:34.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 22:08:35.007: INFO: Waiting up to 5m0s for pod "pod-6a86fe42-8505-45c9-acef-d024608e80fc" in namespace "emptydir-9609" to be "Succeeded or Failed" May 20 22:08:35.009: INFO: Pod "pod-6a86fe42-8505-45c9-acef-d024608e80fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038369ms May 20 22:08:37.012: INFO: Pod "pod-6a86fe42-8505-45c9-acef-d024608e80fc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005443185s May 20 22:08:39.017: INFO: Pod "pod-6a86fe42-8505-45c9-acef-d024608e80fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009829345s STEP: Saw pod success May 20 22:08:39.017: INFO: Pod "pod-6a86fe42-8505-45c9-acef-d024608e80fc" satisfied condition "Succeeded or Failed" May 20 22:08:39.019: INFO: Trying to get logs from node node1 pod pod-6a86fe42-8505-45c9-acef-d024608e80fc container test-container: STEP: delete the pod May 20 22:08:39.043: INFO: Waiting for pod pod-6a86fe42-8505-45c9-acef-d024608e80fc to disappear May 20 22:08:39.048: INFO: Pod pod-6a86fe42-8505-45c9-acef-d024608e80fc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:39.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9609" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":193,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:34.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 20 22:08:34.178: INFO: The status of Pod annotationupdate23efd733-e4ae-4cc4-a45a-c7b0ed1948a0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:36.182: INFO: The status of Pod annotationupdate23efd733-e4ae-4cc4-a45a-c7b0ed1948a0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:38.182: INFO: The status of Pod annotationupdate23efd733-e4ae-4cc4-a45a-c7b0ed1948a0 is Running (Ready = true) May 20 22:08:38.698: INFO: Successfully updated pod "annotationupdate23efd733-e4ae-4cc4-a45a-c7b0ed1948a0" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:40.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4675" for this suite. 
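------------------------------
Sketch for the [sig-storage] Downward API volume spec above: pod annotations are projected into a file, and kubelet rewrites that file when the annotations are patched, which is the update the spec waits to observe. A minimal sketch of the volume; the volume and file names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "annotations",
					// kubelet rewrites this file when pod annotations change,
					// which is what the spec observes after updating the pod
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------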
• [SLOW TEST:6.601 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:31.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-817a86e3-230a-4078-9fcc-e3203683ce46 STEP: Creating configMap with name cm-test-opt-upd-a1473846-bb98-4b59-bd0c-e2b05914c54c STEP: Creating the pod May 20 22:08:31.738: INFO: The status of Pod pod-projected-configmaps-2a470983-3c1f-4433-a1ba-397f48c2903a is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:33.743: INFO: The status of Pod pod-projected-configmaps-2a470983-3c1f-4433-a1ba-397f48c2903a is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:35.742: INFO: The status of Pod pod-projected-configmaps-2a470983-3c1f-4433-a1ba-397f48c2903a is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:37.742: INFO: The status of Pod pod-projected-configmaps-2a470983-3c1f-4433-a1ba-397f48c2903a is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-817a86e3-230a-4078-9fcc-e3203683ce46 STEP: Updating configmap cm-test-opt-upd-a1473846-bb98-4b59-bd0c-e2b05914c54c STEP: Creating configMap with name cm-test-opt-create-a2bb575f-7047-46e7-bbf5-8b3874908fe6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:41.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-405" for this suite. 
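------------------------------
Sketch for the [sig-storage] Projected configMap spec above: the volume projects optional configmaps, so the pod keeps running while one configmap is deleted, another is updated, and a third is created, and kubelet re-syncs the mounted files. A minimal sketch of an optional projection; names follow the test's cm-test-opt-* pattern but are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Optional: true lets the pod run even if the configmap is
					// deleted later (cm-test-opt-del) or created only afterwards
					// (cm-test-opt-create); kubelet re-syncs the volume contents.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             boolPtr(true),
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             boolPtr(true),
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------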
• [SLOW TEST:10.196 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":282,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:40.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready May 20 22:08:40.819: INFO: observed Pod pod-test in namespace pods-7852 in phase Pending with labels: map[test-pod-static:true] & conditions [] May 20 22:08:40.821: INFO: observed Pod pod-test in namespace pods-7852 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC }] May 20 22:08:40.831: INFO: observed Pod pod-test in namespace pods-7852 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC }] May 20 22:08:42.667: INFO: observed Pod pod-test in namespace pods-7852 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC }] May 20 22:08:44.856: INFO: Found Pod pod-test in namespace pods-7852 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:40 +0000 UTC }] STEP: patching the Pod with a new 
Label and updated data May 20 22:08:44.866: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted May 20 22:08:44.885: INFO: observed event type ADDED May 20 22:08:44.885: INFO: observed event type MODIFIED May 20 22:08:44.886: INFO: observed event type MODIFIED May 20 22:08:44.886: INFO: observed event type MODIFIED May 20 22:08:44.886: INFO: observed event type MODIFIED May 20 22:08:44.886: INFO: observed event type MODIFIED May 20 22:08:44.886: INFO: observed event type MODIFIED May 20 22:08:44.886: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:44.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7852" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":37,"skipped":659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:37.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-3390/configmap-test-f1c4d9e9-95fc-41dc-b6b1-331d9382a325 STEP: Creating a pod to test consume configMaps May 20 22:08:37.178: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8" in namespace "configmap-3390" to be "Succeeded or Failed" May 20 22:08:37.181: INFO: Pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102514ms May 20 22:08:39.184: INFO: Pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005198225s May 20 22:08:41.187: INFO: Pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008658201s May 20 22:08:43.190: INFO: Pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011808253s May 20 22:08:45.194: INFO: Pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.015169112s STEP: Saw pod success May 20 22:08:45.194: INFO: Pod "pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8" satisfied condition "Succeeded or Failed" May 20 22:08:45.196: INFO: Trying to get logs from node node2 pod pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8 container env-test: STEP: delete the pod May 20 22:08:45.209: INFO: Waiting for pod pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8 to disappear May 20 22:08:45.211: INFO: Pod pod-configmaps-ed00937a-8174-466b-9f3a-d4a3a8d52ff8 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:45.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3390" for this suite. • [SLOW TEST:8.080 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":443,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:41.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:08:41.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d" in namespace "downward-api-5306" to be "Succeeded or Failed" May 20 22:08:41.942: INFO: Pod "downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53661ms May 20 22:08:43.947: INFO: Pod "downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007726605s May 20 22:08:45.951: INFO: Pod "downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011429655s STEP: Saw pod success May 20 22:08:45.951: INFO: Pod "downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d" satisfied condition "Succeeded or Failed" May 20 22:08:45.953: INFO: Trying to get logs from node node1 pod downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d container client-container: STEP: delete the pod May 20 22:08:45.965: INFO: Waiting for pod downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d to disappear May 20 22:08:45.967: INFO: Pod downwardapi-volume-41f85113-3ecd-48c3-9329-a2552b1abc7d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:45.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5306" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":292,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:46.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info May 20 22:08:46.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8965 cluster-info' May 20 22:08:46.266: INFO: stderr: "" May 20 22:08:46.266: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:46.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8965" for this suite. 
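------------------------------
Sketch for the [sig-storage] Downward API volume spec two records up: projecting limits.memory for a container that sets no memory limit makes the downward API fall back to the node's allocatable memory, which is what the spec asserts. A minimal sketch of that volume item, reusing the container name from the log; the file path is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			// with no memory limit set on client-container, the projected
			// value defaults to the node's allocatable memory
			Resource: "limits.memory",
		},
	}
	fmt.Printf("%+v\n", item)
}
------------------------------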
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":20,"skipped":338,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:39.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:39.090: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:47.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7047" for this suite. • [SLOW TEST:8.179 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":13,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:14.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6091 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6091 STEP: creating replication controller externalsvc in namespace services-6091 I0520 22:08:14.879364 24 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6091, replica count: 2 I0520 22:08:17.931486 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 
0 unknown, 0 runningButNotReady I0520 22:08:20.931884 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:23.933559 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:26.934697 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:29.935141 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:32.936074 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 20 22:08:32.950: INFO: Creating new exec pod May 20 22:08:36.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6091 exec execpodtbmvm -- /bin/sh -x -c nslookup nodeport-service.services-6091.svc.cluster.local' May 20 22:08:37.244: INFO: stderr: "+ nslookup nodeport-service.services-6091.svc.cluster.local\n" May 20 22:08:37.244: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-6091.svc.cluster.local\tcanonical name = externalsvc.services-6091.svc.cluster.local.\nName:\texternalsvc.services-6091.svc.cluster.local\nAddress: 10.233.44.178\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6091, will wait for the garbage collector to delete the pods May 20 22:08:37.302: INFO: Deleting ReplicationController externalsvc took: 4.130751ms May 20 22:08:37.403: INFO: Terminating ReplicationController externalsvc pods took: 101.027924ms May 20 22:08:48.513: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:48.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6091" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:33.688 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":25,"skipped":396,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:44.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4665.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4665.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4665.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4665.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4665.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4665.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 22:08:51.019: INFO: DNS probes using dns-4665/dns-test-1966baed-cc3b-4c96-bb71-1258192db806 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:51.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4665" for this suite. 
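------------------------------
Sketch for the [sig-network] DNS spec above: the wheezy/jessie probe pods loop over getent hosts checks (commands shown in the log) and write OK marker files that the test collects. A rough Go equivalent of a single check, reading /etc/hosts directly; getent may also consult DNS, so this only approximates the probe for the kubelet-managed entries this spec cares about:

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasHostsEntry reports whether /etc/hosts maps some address to name.
func hasHostsEntry(name string) (bool, error) {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if i := strings.IndexByte(line, '#'); i >= 0 {
			line = line[:i] // strip comments
		}
		fields := strings.Fields(line)
		for i, f := range fields {
			if i > 0 && f == name { // field 0 is the address
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// name taken from the probe commands in the log above
	ok, err := hasHostsEntry("dns-querier-1.dns-test-service.dns-4665.svc.cluster.local")
	fmt.Println(ok, err)
}
------------------------------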
• [SLOW TEST:6.075 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":692,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:46.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 20 22:08:46.334: INFO: Waiting up to 5m0s for pod "pod-244f981f-5f82-43b4-85ba-4a72969b2c94" in namespace "emptydir-6332" to be "Succeeded or Failed" May 20 22:08:46.336: INFO: Pod "pod-244f981f-5f82-43b4-85ba-4a72969b2c94": Phase="Pending", Reason="", readiness=false. Elapsed: 1.94725ms May 20 22:08:48.340: INFO: Pod "pod-244f981f-5f82-43b4-85ba-4a72969b2c94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005674352s May 20 22:08:50.346: INFO: Pod "pod-244f981f-5f82-43b4-85ba-4a72969b2c94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011899468s May 20 22:08:52.354: INFO: Pod "pod-244f981f-5f82-43b4-85ba-4a72969b2c94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019251228s STEP: Saw pod success May 20 22:08:52.354: INFO: Pod "pod-244f981f-5f82-43b4-85ba-4a72969b2c94" satisfied condition "Succeeded or Failed" May 20 22:08:52.355: INFO: Trying to get logs from node node1 pod pod-244f981f-5f82-43b4-85ba-4a72969b2c94 container test-container: STEP: delete the pod May 20 22:08:52.368: INFO: Waiting for pod pod-244f981f-5f82-43b4-85ba-4a72969b2c94 to disappear May 20 22:08:52.370: INFO: Pod pod-244f981f-5f82-43b4-85ba-4a72969b2c94 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:52.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6332" for this suite. 
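The pod created by the emptydir test above can be approximated by hand. A sketch under assumptions (busybox in place of the suite's test image, illustrative names): a non-root user writes a file onto a tmpfs-backed emptyDir and the default umask yields the 0644 mode the variant checks.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-demo    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox:1.34
    # write a file, print its mode, and show the tmpfs backing mount
    command: ["sh", "-c", "echo hi > /test-volume/f && stat -c '%a' /test-volume/f && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
EOF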
• [SLOW TEST:6.086 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":345,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:52.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:52.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-2607 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:54.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-8458" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:54.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2607" for this suite. 
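The PodDisruptionBudget collection steps above translate directly to kubectl; names and selector below are illustrative:
# create a PDB, list PDBs across all namespaces, then delete the namespace's collection
kubectl -n my-ns create poddisruptionbudget my-pdb --selector=app=demo --min-available=1
kubectl get poddisruptionbudgets --all-namespaces
kubectl -n my-ns delete poddisruptionbudgets --all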
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":22,"skipped":357,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:33.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:55.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3296" for this suite. • [SLOW TEST:22.039 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":17,"skipped":382,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:48.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:08:48.976: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:08:50.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:52.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:54.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681328, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:08:57.997: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:58.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7724" for this suite. STEP: Destroying namespace "webhook-7724-markers" for this suite. 
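Listing and then deleting the validating webhooks as a collection, as this test does, maps onto kubectl roughly as follows. The label key and value are illustrative: the suite tags its webhook configurations with a per-run label and deletes by that selector.
kubectl get validatingwebhookconfigurations -l e2e-run=example-run-id
kubectl delete validatingwebhookconfigurations -l e2e-run=example-run-id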
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.603 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":26,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:47.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:47.342: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 20 22:08:47.348: INFO: Pod name sample-pod: Found 0 pods out of 1 May 20 22:08:52.352: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 22:08:54.360: INFO: Creating deployment "test-rolling-update-deployment" May 20 22:08:54.363: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 20 22:08:54.368: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 20 22:08:56.375: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 20 22:08:56.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681334, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681334, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681334, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681334, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:08:58.381: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 20 22:08:58.388: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4453 fd5d9bfb-e034-4ce1-9481-ae12ea421c1d 45174 1 2022-05-20 22:08:54 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-05-20 22:08:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043ca278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-20 22:08:54 +0000 UTC,LastTransitionTime:2022-05-20 22:08:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-05-20 22:08:57 +0000 UTC,LastTransitionTime:2022-05-20 22:08:54 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 20 22:08:58.391: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-4453 30e8528e-a20d-4551-8690-8e19ff6c111f 45162 1 2022-05-20 22:08:54 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment fd5d9bfb-e034-4ce1-9481-ae12ea421c1d 0xc0043ca737 0xc0043ca738}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd5d9bfb-e034-4ce1-9481-ae12ea421c1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043ca7c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 20 22:08:58.391: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 20 22:08:58.391: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4453 0270ab1c-1c4c-4f9a-9b59-55fb5f16cca2 45173 2 2022-05-20 22:08:47 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment fd5d9bfb-e034-4ce1-9481-ae12ea421c1d 0xc0043ca617 0xc0043ca618}] [] [{e2e.test Update apps/v1 2022-05-20 22:08:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd5d9bfb-e034-4ce1-9481-ae12ea421c1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043ca6b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 22:08:58.394: INFO: Pod "test-rolling-update-deployment-585b757574-h7m99" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-h7m99 test-rolling-update-deployment-585b757574- deployment-4453 ae12730a-6355-4e94-972b-82d273bc9770 45161 0 2022-05-20 22:08:54 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.92" ], "mac": "9e:07:a9:c3:65:67", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.92" ], "mac": "9e:07:a9:c3:65:67", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 30e8528e-a20d-4551-8690-8e19ff6c111f 0xc0043cabdf 0xc0043cabf0}] [] [{kube-controller-manager Update v1 2022-05-20 22:08:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30e8528e-a20d-4551-8690-8e19ff6c111f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:08:56 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:08:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n84n4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n84n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:08:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:08:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:08:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:08:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.92,StartTime:2022-05-20 22:08:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:08:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://8331ff5e736079706a393976e1226a156e39c14667684c68a0e1faca218e5ddc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:58.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4453" for this suite. 
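The adoption-and-rollout behaviour verified above can be observed on any cluster (namespace and resource names illustrative): once the Deployment adopts the ReplicaSet, the old ReplicaSet is scaled to zero and the revision annotation advances, as the dumps above show.
kubectl -n my-ns rollout status deployment/test-rolling-update-deployment
kubectl -n my-ns get replicasets -l name=sample-pod    # old RS at 0 replicas, new RS owns the pod
kubectl -n my-ns get deployment test-rolling-update-deployment \
  -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'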
• [SLOW TEST:11.082 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":14,"skipped":228,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:55.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:55.909: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-423b40b1-d2fb-4d1a-8ccb-e615929d2e6c" in namespace "security-context-test-2389" to be "Succeeded or Failed" May 20 22:08:55.911: INFO: Pod "busybox-privileged-false-423b40b1-d2fb-4d1a-8ccb-e615929d2e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012922ms May 20 22:08:57.914: INFO: Pod "busybox-privileged-false-423b40b1-d2fb-4d1a-8ccb-e615929d2e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00551594s May 20 22:08:59.919: INFO: Pod "busybox-privileged-false-423b40b1-d2fb-4d1a-8ccb-e615929d2e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010434022s May 20 22:08:59.919: INFO: Pod "busybox-privileged-false-423b40b1-d2fb-4d1a-8ccb-e615929d2e6c" satisfied condition "Succeeded or Failed" May 20 22:08:59.939: INFO: Got logs for pod "busybox-privileged-false-423b40b1-d2fb-4d1a-8ccb-e615929d2e6c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:08:59.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2389" for this suite. 
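The "Operation not permitted" log captured above comes from an unprivileged container attempting a netlink operation. A minimal reproduction (pod name illustrative; same ip invocation as the test):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.34
    # adding a link needs CAP_NET_ADMIN, which an unprivileged container lacks
    command: ["ip", "link", "add", "dummy0", "type", "dummy"]
    securityContext:
      privileged: false
EOF
kubectl logs busybox-privileged-false-demo    # -> ip: RTNETLINK answers: Operation not permitted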
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":390,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:59.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:00.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7021" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":19,"skipped":392,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:58.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:08:58.299: INFO: The status of Pod pod-secrets-a9e22b2a-fe2f-4fcb-87ef-ab0f65a01a0d is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:00.302: INFO: The status of Pod pod-secrets-a9e22b2a-fe2f-4fcb-87ef-ab0f65a01a0d is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:02.303: INFO: The status of Pod pod-secrets-a9e22b2a-fe2f-4fcb-87ef-ab0f65a01a0d is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:02.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8424" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:36.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 20 22:08:36.578: INFO: >>> kubeConfig: /root/.kube/config May 20 22:08:45.208: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:04.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9938" for this suite. 
• [SLOW TEST:27.501 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":31,"skipped":572,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:58.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:08:58.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520" in namespace "projected-615" to be "Succeeded or Failed" May 20 22:08:58.465: INFO: Pod "downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.814825ms May 20 22:09:00.469: INFO: Pod "downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006120779s May 20 22:09:02.473: INFO: Pod "downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010241236s May 20 22:09:04.477: INFO: Pod "downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014646495s STEP: Saw pod success May 20 22:09:04.477: INFO: Pod "downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520" satisfied condition "Succeeded or Failed" May 20 22:09:04.480: INFO: Trying to get logs from node node2 pod downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520 container client-container: STEP: delete the pod May 20 22:09:04.493: INFO: Waiting for pod downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520 to disappear May 20 22:09:04.495: INFO: Pod downwardapi-volume-86b4580a-ad46-4f53-a3ef-8d3ab459d520 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:04.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-615" for this suite. 
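The projected downward API volume used here exposes the container's own memory request as a file. A sketch with illustrative names; with the default divisor the value is rendered in bytes, so a 32Mi request reads back as 33554432:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.34
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF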
• [SLOW TEST:6.083 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":238,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":27,"skipped":456,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:02.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath May 20 22:09:02.362: INFO: Waiting up to 5m0s for pod "var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65" in namespace "var-expansion-9362" to be "Succeeded or Failed" May 20 22:09:02.364: INFO: Pod "var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65": Phase="Pending", Reason="", readiness=false. Elapsed: 1.97926ms May 20 22:09:04.367: INFO: Pod "var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004526148s May 20 22:09:06.371: INFO: Pod "var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00839971s STEP: Saw pod success May 20 22:09:06.371: INFO: Pod "var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65" satisfied condition "Succeeded or Failed" May 20 22:09:06.375: INFO: Trying to get logs from node node2 pod var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65 container dapi-container: STEP: delete the pod May 20 22:09:06.387: INFO: Waiting for pod var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65 to disappear May 20 22:09:06.389: INFO: Pod var-expansion-5eff12af-1708-4f71-b2aa-e4b71e6cea65 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:06.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9362" for this suite. 
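The substitution under test is subPathExpr: an environment variable expanded into the volume mount's subpath. A minimal sketch with illustrative names; on the node, the written file lands under a directory named after the pod:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.34
    command: ["sh", "-c", "echo success > /subpath_mount/result"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir1
      mountPath: /subpath_mount
      subPathExpr: $(POD_NAME)    # expands to the pod's own name
  volumes:
  - name: workdir1
    emptyDir: {}
EOF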
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":28,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:54.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. May 20 22:08:54.565: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:56.568: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:58.568: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 20 22:08:58.581: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:00.584: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:02.585: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 20 22:09:02.594: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 22:09:02.596: INFO: Pod pod-with-prestop-exec-hook still exists May 20 22:09:04.597: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 22:09:04.600: INFO: Pod pod-with-prestop-exec-hook still exists May 20 22:09:06.596: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 22:09:06.599: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:06.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-690" for this suite. 
• [SLOW TEST:12.082 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":370,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:04.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:09:04.103: INFO: Creating simple deployment test-new-deployment May 20 22:09:04.113: INFO: deployment "test-new-deployment" doesn't have the required revision set May 20 22:09:06.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:09:08.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681344, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 20 22:09:10.148: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-2310 0d3a373f-c42c-4605-a80a-05f489dd4598 45592 3 2022-05-20 22:09:04 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-05-20 22:09:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006006e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-20 
22:09:08 +0000 UTC,LastTransitionTime:2022-05-20 22:09:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-05-20 22:09:08 +0000 UTC,LastTransitionTime:2022-05-20 22:09:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 20 22:09:10.151: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-2310 b03522e9-46fa-4e2c-8219-5acc71bae7de 45595 3 2022-05-20 22:09:04 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 0d3a373f-c42c-4605-a80a-05f489dd4598 0xc005e911f7 0xc005e911f8}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d3a373f-c42c-4605-a80a-05f489dd4598\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e912c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 20 22:09:10.155: INFO: Pod "test-new-deployment-847dcfb7fb-4b8ms" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-4b8ms test-new-deployment-847dcfb7fb- deployment-2310 98d5951e-f02d-4e2d-9c0a-1a4c296469a0 45597 0 2022-05-20 22:09:10 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet 
test-new-deployment-847dcfb7fb b03522e9-46fa-4e2c-8219-5acc71bae7de 0xc005d49b1f 0xc005d49b30}] [] [{kube-controller-manager Update v1 2022-05-20 22:09:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b03522e9-46fa-4e2c-8219-5acc71bae7de\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x6gbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x6gbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:ni
l,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:09:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:09:10.155: INFO: Pod "test-new-deployment-847dcfb7fb-vzzqk" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-vzzqk test-new-deployment-847dcfb7fb- deployment-2310 a0bf9753-19ad-4cef-9e32-8a14406dd758 45546 0 2022-05-20 22:09:04 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.96" ], "mac": "ee:9f:bb:16:26:58", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.96" ], "mac": "ee:9f:bb:16:26:58", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb b03522e9-46fa-4e2c-8219-5acc71bae7de 0xc005d49cdf 0xc005d49cf0}] [] [{kube-controller-manager Update v1 2022-05-20 22:09:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b03522e9-46fa-4e2c-8219-5acc71bae7de\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:09:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:09:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b9rh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9rh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:09:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:09:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:09:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:09:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.96,StartTime:2022-05-20 22:09:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:09:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://224616d704ddb4e3c265396b6c715d3a32c17f7c4377d7248386ca746e2c29bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:10.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2310" for this suite. 
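Note on the scale subresource exercised above: reads and writes to deployments/<name>/scale serialize as an autoscaling/v1 Scale object. A minimal sketch of that payload, with name, namespace, and replica count mirroring this run (the test itself drives it through the Go client, not YAML):

```yaml
# Sketch of the Scale object exchanged with the deployments/<name>/scale
# subresource; values mirror the run above.
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: test-new-deployment
  namespace: deployment-2310
spec:
  replicas: 4   # the updated count later verified in Spec.Replicas
```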
• [SLOW TEST:6.083 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:06.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 20 22:09:06.550: INFO: Waiting up to 5m0s for pod "security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f" in namespace "security-context-1942" to be "Succeeded or Failed" May 20 22:09:06.552: INFO: Pod "security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028536ms May 20 22:09:08.555: INFO: Pod "security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005830998s May 20 22:09:10.559: INFO: Pod "security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009623759s STEP: Saw pod success May 20 22:09:10.559: INFO: Pod "security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f" satisfied condition "Succeeded or Failed" May 20 22:09:10.562: INFO: Trying to get logs from node node2 pod security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f container test-container: STEP: delete the pod May 20 22:09:10.575: INFO: Waiting for pod security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f to disappear May 20 22:09:10.577: INFO: Pod security-context-f3f043dc-138c-4e47-9d68-acdbe9d8994f no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:10.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1942" for this suite. 
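For reference, the container-level securityContext fields exercised by the test above can be reproduced with a minimal pod manifest along these lines (pod name, image, and IDs are illustrative, not the test's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "id"]   # logs the effective uid/gid for verification
    securityContext:
      runAsUser: 1001             # illustrative non-root uid
      runAsGroup: 2002            # illustrative gid
```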
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":511,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:04.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:09:04.584: INFO: The status of Pod server-envvars-ee3f2feb-315a-4ee8-955c-8ba63b986bf8 is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:06.587: INFO: The status of Pod server-envvars-ee3f2feb-315a-4ee8-955c-8ba63b986bf8 is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:08.588: INFO: The status of Pod server-envvars-ee3f2feb-315a-4ee8-955c-8ba63b986bf8 is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:10.586: INFO: The status of Pod server-envvars-ee3f2feb-315a-4ee8-955c-8ba63b986bf8 is Running (Ready = true) May 20 22:09:10.604: INFO: Waiting up to 5m0s for pod "client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11" in namespace "pods-5186" to be "Succeeded or Failed" May 20 22:09:10.607: INFO: Pod "client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422625ms May 20 22:09:12.609: INFO: Pod "client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004788936s May 20 22:09:14.614: INFO: Pod "client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009281865s May 20 22:09:16.617: INFO: Pod "client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013134943s STEP: Saw pod success May 20 22:09:16.618: INFO: Pod "client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11" satisfied condition "Succeeded or Failed" May 20 22:09:16.620: INFO: Trying to get logs from node node1 pod client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11 container env3cont: STEP: delete the pod May 20 22:09:16.799: INFO: Waiting for pod client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11 to disappear May 20 22:09:16.802: INFO: Pod client-envvars-48c5031a-925f-4451-bec3-8cc03c877c11 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:16.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5186" for this suite. 
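The env-vars test above relies on kubelet injecting discovery variables for every Service that exists when a pod starts. A sketch, assuming a Service named fooservice (name, ports, and the selector label are assumptions for illustration): pods created in the namespace afterwards would see FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT in their environment.

```yaml
# Sketch only: with this Service in place, later-created pods in the
# namespace get FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT env vars.
apiVersion: v1
kind: Service
metadata:
  name: fooservice          # hypothetical name
spec:
  selector:
    name: server-envvars    # assumed label on the server pod above
  ports:
  - port: 8765
    targetPort: 8080
```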
• [SLOW TEST:12.281 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:10.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-adde7a98-3337-4223-9e0b-08a9b81ffd95 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:18.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7100" for this suite. • [SLOW TEST:8.069 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":524,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:51.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3084 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 22:08:51.139: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 22:08:51.168: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:53.171: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:08:55.172: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:57.172: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:08:59.173: INFO: The status of Pod netserver-0 
is Running (Ready = false) May 20 22:09:01.172: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:09:03.173: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:09:05.173: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:09:07.173: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:09:09.173: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:09:11.171: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:09:13.172: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 22:09:13.178: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 22:09:19.203: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 20 22:09:19.203: INFO: Breadth first check of 10.244.4.37 on host 10.10.190.207... May 20 22:09:19.206: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.47:9080/dial?request=hostname&protocol=http&host=10.244.4.37&port=8080&tries=1'] Namespace:pod-network-test-3084 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:09:19.206: INFO: >>> kubeConfig: /root/.kube/config May 20 22:09:19.301: INFO: Waiting for responses: map[] May 20 22:09:19.301: INFO: reached 10.244.4.37 after 0/1 tries May 20 22:09:19.301: INFO: Breadth first check of 10.244.3.91 on host 10.10.190.208... May 20 22:09:19.304: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.47:9080/dial?request=hostname&protocol=http&host=10.244.3.91&port=8080&tries=1'] Namespace:pod-network-test-3084 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:09:19.304: INFO: >>> kubeConfig: /root/.kube/config May 20 22:09:19.398: INFO: Waiting for responses: map[] May 20 22:09:19.398: INFO: reached 10.244.3.91 after 0/1 tries May 20 22:09:19.398: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:19.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3084" for this suite. 
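The intra-pod check above runs agnhost netexec servers as the netserver-N pods, then asks the test-container pod's /dial helper to curl each server's /hostname endpoint (visible in the ExecWithOptions curl commands). A sketch of one endpoint pod, under assumed ports and names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-demo    # illustrative; the suite names these netserver-0..N
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080
      protocol: TCP
```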
• [SLOW TEST:28.290 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:16.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: May 20 22:09:16.956: INFO: Waiting up to 5m0s for pod "test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386" in namespace "svcaccounts-1840" to be "Succeeded or Failed" May 20 22:09:16.958: INFO: Pod "test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223755ms May 20 22:09:18.962: INFO: Pod "test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005949201s May 20 22:09:20.966: INFO: Pod "test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01015684s STEP: Saw pod success May 20 22:09:20.966: INFO: Pod "test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386" satisfied condition "Succeeded or Failed" May 20 22:09:20.969: INFO: Trying to get logs from node node2 pod test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386 container agnhost-container: STEP: delete the pod May 20 22:09:20.982: INFO: Waiting for pod test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386 to disappear May 20 22:09:20.984: INFO: Pod test-pod-fcfca3d2-993f-41b4-a9df-583359f9c386 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:20.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1840" for this suite. 
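The projected-token test above mounts a serviceAccountToken volume source rather than the legacy secret-based token. A minimal sketch (pod name, mount path, and expiry are illustrative; the default kube-api-access volumes dumped elsewhere in this run use expirationSeconds: 3607):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo               # illustrative name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["cat", "/var/run/secrets/tokens/token"]
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600
```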
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":17,"skipped":295,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:19.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:09:19.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e" in namespace "projected-7786" to be "Succeeded or Failed" May 20 22:09:19.499: INFO: Pod "downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086672ms May 20 22:09:21.502: INFO: Pod "downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005459498s May 20 22:09:23.509: INFO: Pod "downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011783681s STEP: Saw pod success May 20 22:09:23.509: INFO: Pod "downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e" satisfied condition "Succeeded or Failed" May 20 22:09:23.512: INFO: Trying to get logs from node node1 pod downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e container client-container: STEP: delete the pod May 20 22:09:23.525: INFO: Waiting for pod downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e to disappear May 20 22:09:23.528: INFO: Pod downwardapi-volume-bf6acbe0-a18c-46fd-9c9c-7dca68e6042e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:23.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7786" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":753,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:23.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 20 22:09:23.588: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-2442 767692bd-e7e0-48c9-9b94-a7656526ce07 45911 0 2022-05-20 22:09:23 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-20 22:09:23 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ww8dl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ww8dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:09:23.591: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:25.595: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:27.595: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 20 22:09:27.595: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2442 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:09:27.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... May 20 22:09:27.684: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2442 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:09:27.684: INFO: >>> kubeConfig: /root/.kube/config May 20 22:09:27.785: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:27.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2442" for this suite. 
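The dnsConfig verified above can be read straight out of the pod dump: dnsPolicy None with a single custom nameserver and search path. Reconstructed as a manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  dnsPolicy: "None"          # ignore the cluster DNS entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
```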
• ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":41,"skipped":759,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:18.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 20 22:09:25.234: INFO: Successfully updated pod "adopt-release-8bpsw" STEP: Checking that the Job readopts the Pod May 20 22:09:25.234: INFO: Waiting up to 15m0s for pod "adopt-release-8bpsw" in namespace "job-1150" to be "adopted" May 20 22:09:25.237: INFO: Pod "adopt-release-8bpsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.505519ms May 20 22:09:27.242: INFO: Pod "adopt-release-8bpsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.007373757s May 20 22:09:27.242: INFO: Pod "adopt-release-8bpsw" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 20 22:09:27.752: INFO: Successfully updated pod "adopt-release-8bpsw" STEP: Checking that the Job releases the Pod May 20 22:09:27.752: INFO: Waiting up to 15m0s for pod "adopt-release-8bpsw" in namespace "job-1150" to be "released" May 20 22:09:27.755: INFO: Pod "adopt-release-8bpsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.098988ms May 20 22:09:29.759: INFO: Pod "adopt-release-8bpsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.006136848s May 20 22:09:29.759: INFO: Pod "adopt-release-8bpsw" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:29.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1150" for this suite. 
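Adoption and release in the Job test above hinge on labels: the controller re-adopts an orphaned pod whose labels still match the Job's selector, and releases a pod once those labels are removed. A sketch of a comparable Job (manifest reconstructed for illustration, not the test's code; the pod-template label is an assumption):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release          # mirrors the pod names above
spec:
  parallelism: 2
  template:
    metadata:
      labels:
        job: adopt-release     # removing this label releases the pod
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sleep", "1000000"]
```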
• [SLOW TEST:11.074 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:06.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-m5v5 STEP: Creating a pod to test atomic-volume-subpath May 20 22:09:06.671: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m5v5" in namespace "subpath-870" to be "Succeeded or Failed" May 20 22:09:06.673: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28387ms May 20 22:09:08.679: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007714219s May 20 22:09:10.683: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 4.011860269s May 20 22:09:12.688: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 6.017410057s May 20 22:09:14.694: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 8.023284038s May 20 22:09:16.698: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 10.026593656s May 20 22:09:18.703: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 12.031691146s May 20 22:09:20.707: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 14.036316079s May 20 22:09:22.711: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 16.040230478s May 20 22:09:24.720: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 18.049196338s May 20 22:09:26.725: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 20.054041003s May 20 22:09:28.730: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Running", Reason="", readiness=true. Elapsed: 22.058724131s May 20 22:09:30.736: INFO: Pod "pod-subpath-test-downwardapi-m5v5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.064489888s STEP: Saw pod success May 20 22:09:30.736: INFO: Pod "pod-subpath-test-downwardapi-m5v5" satisfied condition "Succeeded or Failed" May 20 22:09:30.738: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-m5v5 container test-container-subpath-downwardapi-m5v5: STEP: delete the pod May 20 22:09:30.751: INFO: Waiting for pod pod-subpath-test-downwardapi-m5v5 to disappear May 20 22:09:30.752: INFO: Pod pod-subpath-test-downwardapi-m5v5 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-m5v5 May 20 22:09:30.752: INFO: Deleting pod "pod-subpath-test-downwardapi-m5v5" in namespace "subpath-870" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:30.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-870" for this suite. • [SLOW TEST:24.135 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":376,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:21.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:37.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3861" for this suite. 
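The quota test above distinguishes pods via the Terminating/NotTerminating scopes (a pod counts as "terminating" when spec.activeDeadlineSeconds is set). A sketch of the two quota objects it creates, with illustrative names:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating          # illustrative name
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]          # counts pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]       # counts long-running pods
```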
• [SLOW TEST:16.114 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":18,"skipped":299,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:37.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 20 22:09:37.161: INFO: Waiting up to 5m0s for pod "downward-api-a4e30085-4813-41a1-b15d-fad516f5bded" in namespace "downward-api-6664" to be "Succeeded or Failed" May 20 22:09:37.168: INFO: Pod "downward-api-a4e30085-4813-41a1-b15d-fad516f5bded": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799574ms May 20 22:09:39.172: INFO: Pod "downward-api-a4e30085-4813-41a1-b15d-fad516f5bded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010898822s May 20 22:09:41.175: INFO: Pod "downward-api-a4e30085-4813-41a1-b15d-fad516f5bded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013901991s STEP: Saw pod success May 20 22:09:41.175: INFO: Pod "downward-api-a4e30085-4813-41a1-b15d-fad516f5bded" satisfied condition "Succeeded or Failed" May 20 22:09:41.178: INFO: Trying to get logs from node node2 pod downward-api-a4e30085-4813-41a1-b15d-fad516f5bded container dapi-container: STEP: delete the pod May 20 22:09:41.189: INFO: Waiting for pod downward-api-a4e30085-4813-41a1-b15d-fad516f5bded to disappear May 20 22:09:41.191: INFO: Pod downward-api-a4e30085-4813-41a1-b15d-fad516f5bded no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:41.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6664" for this suite. 
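The downward-API test above maps pod metadata into the container environment. A sketch of the UID wiring (pod and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's own UID, resolved by kubelet
```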
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":302,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:41.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-252be745-ab30-4e3f-8b3b-2aba8f648bad STEP: Creating a pod to test consume configMaps May 20 22:09:41.256: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b" in namespace "projected-5294" to be "Succeeded or Failed" May 20 22:09:41.260: INFO: Pod "pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.242183ms May 20 22:09:43.264: INFO: Pod "pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007607226s May 20 22:09:45.268: INFO: Pod "pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011753686s STEP: Saw pod success May 20 22:09:45.268: INFO: Pod "pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b" satisfied condition "Succeeded or Failed" May 20 22:09:45.270: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b container agnhost-container: STEP: delete the pod May 20 22:09:45.285: INFO: Waiting for pod pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b to disappear May 20 22:09:45.287: INFO: Pod pod-projected-configmaps-7e11ab20-bbad-4725-a6a2-8d780ac7902b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:45.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5294" for this suite. 
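The projected-configMap test above consumes a key through an items mapping while running as a non-root user. A sketch under assumed key and path names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative
spec:
  securityContext:
    runAsUser: 1000                # non-root; id is illustrative
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1            # assumed key in the ConfigMap
            path: path/to/data-2   # remapped file name inside the volume
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
```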
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:45.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments May 20 22:09:45.387: INFO: Waiting up to 5m0s for pod "client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568" in namespace "containers-9007" to be "Succeeded or Failed" May 20 22:09:45.390: INFO: Pod "client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.709399ms May 20 22:09:47.393: INFO: Pod "client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00566818s May 20 22:09:49.397: INFO: Pod "client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01049164s STEP: Saw pod success May 20 22:09:49.398: INFO: Pod "client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568" satisfied condition "Succeeded or Failed" May 20 22:09:49.400: INFO: Trying to get logs from node node1 pod client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568 container agnhost-container: STEP: delete the pod May 20 22:09:49.416: INFO: Waiting for pod client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568 to disappear May 20 22:09:49.418: INFO: Pod client-containers-608080b0-0712-4b24-aa4c-e7000e5a9568 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:49.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9007" for this suite. 
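Overriding an image's default arguments, as tested above, is just spec.containers[].args: args replace the image's CMD, while command (if set) would replace the ENTRYPOINT. A sketch with illustrative values (agnhost's entrypoint-tester subcommand echoes the arguments it receives):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["entrypoint-tester", "override", "arguments"]  # replaces the image CMD
```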
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":329,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:27.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-2qg2 STEP: Creating a pod to test atomic-volume-subpath May 20 22:09:27.855: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2qg2" in namespace "subpath-2396" to be "Succeeded or Failed" May 20 22:09:27.860: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519842ms May 20 22:09:29.863: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007967312s May 20 22:09:31.868: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012181792s May 20 22:09:33.873: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 6.017122905s May 20 22:09:35.878: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 8.022473883s May 20 22:09:37.882: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 10.02672523s May 20 22:09:39.887: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 12.03142024s May 20 22:09:41.891: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 14.035136947s May 20 22:09:43.896: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 16.040059849s May 20 22:09:45.899: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 18.043860694s May 20 22:09:47.903: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 20.04741076s May 20 22:09:49.907: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Running", Reason="", readiness=true. Elapsed: 22.051176892s May 20 22:09:51.911: INFO: Pod "pod-subpath-test-configmap-2qg2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055981664s STEP: Saw pod success May 20 22:09:51.912: INFO: Pod "pod-subpath-test-configmap-2qg2" satisfied condition "Succeeded or Failed" May 20 22:09:51.914: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-2qg2 container test-container-subpath-configmap-2qg2: STEP: delete the pod May 20 22:09:51.928: INFO: Waiting for pod pod-subpath-test-configmap-2qg2 to disappear May 20 22:09:51.930: INFO: Pod pod-subpath-test-configmap-2qg2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2qg2 May 20 22:09:51.930: INFO: Deleting pod "pod-subpath-test-configmap-2qg2" in namespace "subpath-2396" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:51.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2396" for this suite. • [SLOW TEST:24.121 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":42,"skipped":766,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:49.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod May 20 22:09:49.482: INFO: The status of Pod pod-hostip-b86e9575-24d7-48a9-a7f4-ca19e5e73b3e is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:51.486: INFO: The status of Pod pod-hostip-b86e9575-24d7-48a9-a7f4-ca19e5e73b3e is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:53.486: INFO: The status of Pod pod-hostip-b86e9575-24d7-48a9-a7f4-ca19e5e73b3e is Running (Ready = true) May 20 22:09:53.491: INFO: Pod pod-hostip-b86e9575-24d7-48a9-a7f4-ca19e5e73b3e has hostIP: 10.10.190.207 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:53.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7896" for this suite. 
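The host-IP test above reads status.hostIP from the pod object through the API. Inside a pod, the same value is typically surfaced with the downward API instead; a sketch of that alternative (not what the test does, names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo                # illustrative
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP; sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # IP of the node running this pod
```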
• ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":335,"failed":0} [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:53.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:57.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-81" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:30.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 20 22:09:30.885: INFO: >>> kubeConfig: /root/.kube/config May 20 22:09:39.474: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:09:58.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3342" for this suite. 
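The two CRDs in the spec above share a group and version and differ only in kind, and the apiserver is expected to publish both kinds in its OpenAPI document. A sketch of how such a pair can be registered with the apiextensions client; the group, kind names, and minimal schema below are illustrative placeholders, not the randomized ones the test generates:

import (
    "context"
    "fmt"

    apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createCRD registers one namespaced CRD; calling it twice with different
// kinds but the same group/version reproduces the shape of this spec.
func createCRD(ctx context.Context, c apiextclient.Interface, kind, singular, plural string) error {
    crd := &apiextv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("%s.example.com", plural)}, // must be <plural>.<group>
        Spec: apiextv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextv1.NamespaceScoped,
            Names: apiextv1.CustomResourceDefinitionNames{
                Kind: kind, Singular: singular, Plural: plural, ListKind: kind + "List",
            },
            Versions: []apiextv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                Schema: &apiextv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
                },
            }},
        },
    }
    _, err := c.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
    return err
}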
• [SLOW TEST:27.401 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":25,"skipped":420,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:58.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container May 20 22:10:04.363: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8679 PodName:pod-sharedvolume-b989b6a5-6311-4523-bb86-bb4acb8cce0e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:10:04.363: INFO: >>> kubeConfig: /root/.kube/config May 20 22:10:04.447: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:04.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8679" for this suite. 
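The shared-volume spec above works because both containers mount the same emptyDir volume, so the exec'd `cat /usr/share/volumeshare/shareddata.txt` sees what the other container wrote. A sketch of the pod shape involved; the container names, busybox image, and sleep commands are illustrative, only the mount path echoes the log:

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sharedVolumePod writes a file from one container and leaves it readable
// by the second container through the shared emptyDir mount.
func sharedVolumePod() *corev1.Pod {
    mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name:         "shared-data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{
                {
                    Name:         "writer",
                    Image:        "busybox",
                    Command:      []string{"/bin/sh", "-c", "echo Hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{mount},
                },
                {
                    Name:         "reader", // the test execs `cat` inside this container
                    Image:        "busybox",
                    Command:      []string{"/bin/sh", "-c", "sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{mount},
                },
            },
        },
    }
}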
• [SLOW TEST:6.139 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":26,"skipped":452,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:57.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:09:58.607: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:10:00.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:10:02.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681398, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:10:05.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:05.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5351" for this suite. STEP: Destroying namespace "webhook-5351-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.957 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":32,"skipped":580,"failed":0} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:10.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 20 22:09:10.198: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 45616 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:09:10.199: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 45616 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 20 22:09:20.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 45839 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:09:20.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 45839 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 20 22:09:30.218: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 46055 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:09:30.218: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 46055 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 20 22:09:40.227: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 46214 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:09:40.227: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7541 b0b56cb1-988a-4abc-9adc-06b1ea6e2820 46214 0 2022-05-20 22:09:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} 
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 20 22:09:50.237: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7541 26f966fe-3900-4f18-933f-07596e532cd6 46332 0 2022-05-20 22:09:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:09:50.237: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7541 26f966fe-3900-4f18-933f-07596e532cd6 46332 0 2022-05-20 22:09:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 20 22:10:00.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7541 26f966fe-3900-4f18-933f-07596e532cd6 46525 0 2022-05-20 22:09:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 22:10:00.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7541 26f966fe-3900-4f18-933f-07596e532cd6 46525 0 2022-05-20 22:09:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-20 22:09:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:10.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7541" for this suite. 
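Each watcher in the spec above is a label-filtered watch on the namespace's ConfigMaps; events arrive typed as ADDED, MODIFIED, or DELETED, matching the `Got : ...` lines in the log. A minimal sketch of one such watcher (the selector value mirrors the labels in the log; the printing is illustrative):

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// watchConfigMaps streams events for configmaps carrying the given label,
// e.g. selector = "watch-this-configmap=multiple-watchers-A".
func watchConfigMaps(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        cm, ok := ev.Object.(*corev1.ConfigMap)
        if !ok {
            continue // e.g. a watch.Error event carries a *metav1.Status instead
        }
        fmt.Printf("Got : %s %s\n", ev.Type, cm.Name) // ADDED, MODIFIED or DELETED
    }
    return nil
}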
• [SLOW TEST:60.091 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":33,"skipped":580,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:04.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs May 20 22:10:04.504: INFO: Waiting up to 5m0s for pod "pod-0cb959f2-90ed-4748-93e5-73f1c390fa73" in namespace "emptydir-4617" to be "Succeeded or Failed" May 20 22:10:04.506: INFO: Pod "pod-0cb959f2-90ed-4748-93e5-73f1c390fa73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.002532ms May 20 22:10:06.510: INFO: Pod "pod-0cb959f2-90ed-4748-93e5-73f1c390fa73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005785405s May 20 22:10:08.513: INFO: Pod "pod-0cb959f2-90ed-4748-93e5-73f1c390fa73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008716367s May 20 22:10:10.517: INFO: Pod "pod-0cb959f2-90ed-4748-93e5-73f1c390fa73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012769513s STEP: Saw pod success May 20 22:10:10.517: INFO: Pod "pod-0cb959f2-90ed-4748-93e5-73f1c390fa73" satisfied condition "Succeeded or Failed" May 20 22:10:10.521: INFO: Trying to get logs from node node2 pod pod-0cb959f2-90ed-4748-93e5-73f1c390fa73 container test-container: STEP: delete the pod May 20 22:10:10.535: INFO: Waiting for pod pod-0cb959f2-90ed-4748-93e5-73f1c390fa73 to disappear May 20 22:10:10.536: INFO: Pod pod-0cb959f2-90ed-4748-93e5-73f1c390fa73 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:10.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4617" for this suite. 
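The (root,0666,tmpfs) variant above backs the emptyDir with node memory rather than disk. A sketch of the volume and a container that writes and stats a 0666 file in it; the busybox image and shell commands are illustrative stand-ins for the e2e mounttest image the suite actually uses:

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsPod mounts a memory-backed emptyDir and exercises file permissions on it.
func tmpfsPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium: Memory mounts the emptyDir as tmpfs on the node.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"/bin/sh", "-c", "touch /test/f && chmod 0666 /test/f && stat -c %a /test/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
            }},
        },
    }
}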
• [SLOW TEST:6.077 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":455,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:10.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 20 22:10:10.585: INFO: Waiting up to 5m0s for pod "downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c" in namespace "downward-api-9349" to be "Succeeded or Failed" May 20 22:10:10.588: INFO: Pod "downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.633054ms May 20 22:10:12.592: INFO: Pod "downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00676287s May 20 22:10:14.595: INFO: Pod "downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009722305s STEP: Saw pod success May 20 22:10:14.595: INFO: Pod "downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c" satisfied condition "Succeeded or Failed" May 20 22:10:14.598: INFO: Trying to get logs from node node2 pod downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c container dapi-container: STEP: delete the pod May 20 22:10:14.610: INFO: Waiting for pod downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c to disappear May 20 22:10:14.612: INFO: Pod downward-api-ee247475-aa10-46b9-b34e-dfd6a43afd5c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:14.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9349" for this suite. 
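Because the container in the spec above declares no resource limits, the downward API resolves limits.cpu and limits.memory to the node's allocatable values, which is what the test asserts. A sketch of the env wiring (the env var names and the millicore divisor are illustrative):

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// defaultLimitEnv exposes the effective cpu/memory limits to the container;
// with no limits set on the container they fall back to node allocatable.
func defaultLimitEnv() []corev1.EnvVar {
    return []corev1.EnvVar{
        {
            Name: "CPU_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    Resource: "limits.cpu",
                    Divisor:  resource.MustParse("1m"), // report the value in millicores
                },
            },
        },
        {
            Name: "MEMORY_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
            },
        },
    }
}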
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":457,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":24,"skipped":375,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:05.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 20 22:10:05.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5548 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' May 20 22:10:05.904: INFO: stderr: "" May 20 22:10:05.904: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 May 20 22:10:05.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5548 delete pods e2e-test-httpd-pod' May 20 22:10:16.339: INFO: stderr: "" May 20 22:10:16.339: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:16.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5548" for this suite. 
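With --restart=Never, kubectl run creates a bare Pod object rather than a Deployment or Job, which is why the verification step above only has to look for the pod itself. A rough client-go equivalent of the command in the log (the "run" label mirrors what kubectl run attaches; error handling is minimal by design):

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// runPodNever creates the one-shot httpd pod that `kubectl run --restart=Never` would.
func runPodNever(ctx context.Context, cs kubernetes.Interface, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "e2e-test-httpd-pod",
            Labels: map[string]string{"run": "e2e-test-httpd-pod"},
        },
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever, // bare pod; nothing recreates it on exit
            Containers: []corev1.Container{{
                Name:  "e2e-test-httpd-pod",
                Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
            }},
        },
    }
    _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}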
• [SLOW TEST:10.631 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":25,"skipped":375,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:14.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-5dc289a3-984c-4986-8a26-854e3e769097 STEP: Creating a pod to test consume secrets May 20 22:10:14.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6" in namespace "projected-2637" to be "Succeeded or Failed" May 20 22:10:14.686: INFO: Pod "pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.954337ms May 20 22:10:16.689: INFO: Pod "pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005155338s May 20 22:10:18.695: INFO: Pod "pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010733869s May 20 22:10:20.700: INFO: Pod "pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016028013s STEP: Saw pod success May 20 22:10:20.700: INFO: Pod "pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6" satisfied condition "Succeeded or Failed" May 20 22:10:20.702: INFO: Trying to get logs from node node2 pod pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6 container projected-secret-volume-test: STEP: delete the pod May 20 22:10:20.714: INFO: Waiting for pod pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6 to disappear May 20 22:10:20.716: INFO: Pod pod-projected-secrets-940068c5-8579-44df-bfda-9e42267919a6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:20.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2637" for this suite. 
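The defaultMode in the projected-secret spec above applies to every file projected from the secret unless an individual item overrides it. A sketch of the volume definition; the secret name and the 0440 mode are illustrative, the conformance test uses its own generated name and mode:

import (
    corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume projects all keys of the named secret with one file mode.
func projectedSecretVolume(secretName string) corev1.Volume {
    mode := int32(0440) // applied to each projected file unless an item overrides it
    return corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                    },
                }},
            },
        },
    }
}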
• [SLOW TEST:6.080 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":466,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:20.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 20 22:10:20.763: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:27.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4300" for this suite. 
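On a RestartAlways pod, the init containers still run exactly once each, strictly in order, and all must exit 0 before the main container starts; that sequencing is what the spec above invokes. A sketch of the pod shape (names and images are illustrative):

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers to completion before the main container.
func initContainerPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                // run sequentially; each must succeed before the next starts
                {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{{
                Name:  "run1",
                Image: "k8s.gcr.io/pause:3.4.1", // started only after both inits complete
            }},
        },
    }
}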
• [SLOW TEST:6.774 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":30,"skipped":472,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:16.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9092 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9092 I0520 22:10:16.397381 27 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9092, replica count: 2 I0520 22:10:19.448371 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:10:22.449131 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:10:22.449: INFO: Creating new exec pod May 20 22:10:27.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9092 exec execpod4shz4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 20 22:10:27.716: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 20 22:10:27.716: INFO: stdout: "externalname-service-956hf" May 20 22:10:27.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9092 exec execpod4shz4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.15.120 80' May 20 22:10:27.973: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.15.120 80\nConnection to 10.233.15.120 80 port [tcp/http] succeeded!\n" May 20 22:10:27.973: INFO: stdout: "" May 20 22:10:28.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9092 exec execpod4shz4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.15.120 80' May 20 22:10:29.237: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.15.120 80\nConnection to 10.233.15.120 80 port [tcp/http] succeeded!\n" May 20 22:10:29.237: INFO: 
stdout: "" May 20 22:10:29.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9092 exec execpod4shz4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.15.120 80' May 20 22:10:30.208: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.15.120 80\nConnection to 10.233.15.120 80 port [tcp/http] succeeded!\n" May 20 22:10:30.208: INFO: stdout: "externalname-service-7wzp2" May 20 22:10:30.208: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:30.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9092" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:13.876 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":26,"skipped":377,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:30.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:10:30.268: INFO: Creating deployment "test-recreate-deployment" May 20 22:10:30.271: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 20 22:10:30.276: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 20 22:10:32.282: INFO: Waiting deployment "test-recreate-deployment" to complete May 20 22:10:32.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681430, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681430, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681430, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681430, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:10:34.290: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 20 22:10:34.296: INFO: Updating deployment test-recreate-deployment May 20 22:10:34.297: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 20 22:10:34.336: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7310 fd216731-38b0-48b0-8420-4c82b30813bc 47135 2 2022-05-20 22:10:30 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-20 22:10:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-20 22:10:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004306fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-05-20 22:10:34 +0000 UTC,LastTransitionTime:2022-05-20 22:10:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet
"test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-05-20 22:10:34 +0000 UTC,LastTransitionTime:2022-05-20 22:10:30 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 20 22:10:34.339: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-7310 f63c0153-954d-4677-b372-35c6b99b4394 47134 1 2022-05-20 22:10:34 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment fd216731-38b0-48b0-8420-4c82b30813bc 0xc003940270 0xc003940271}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:10:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd216731-38b0-48b0-8420-4c82b30813bc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039402e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 22:10:34.339: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 20 22:10:34.339: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-7310 42191c0f-8e77-4f0e-9d64-0f0b7b6acf23 47122 2 2022-05-20 22:10:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment fd216731-38b0-48b0-8420-4c82b30813bc 0xc003940177 0xc003940178}] [] 
[{kube-controller-manager Update apps/v1 2022-05-20 22:10:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd216731-38b0-48b0-8420-4c82b30813bc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003940208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 22:10:34.342: INFO: Pod "test-recreate-deployment-85d47dcb4-77g9d" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-77g9d test-recreate-deployment-85d47dcb4- deployment-7310 1bcdbb31-b3fb-48e1-b154-c414a82a34e8 47136 0 2022-05-20 22:10:34 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 f63c0153-954d-4677-b372-35c6b99b4394 0xc00394071f 0xc003940730}] [] [{kube-controller-manager Update v1 2022-05-20 22:10:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f63c0153-954d-4677-b372-35c6b99b4394\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-20 22:10:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mx2c9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mx2c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-05-20 22:10:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:34.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7310" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":27,"skipped":382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:10.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:38.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1777" for this suite. 
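The quota in the spec above counts ConfigMap objects, so status.used ticks up when the test creates its ConfigMap and back down when it deletes it; the waits in the log are for the quota controller to recalculate usage. A sketch of such a quota (the name and the hard limit of 2 are illustrative):

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// configMapQuota caps the number of configmaps allowed in the namespace.
func configMapQuota(ctx context.Context, cs kubernetes.Interface, ns string) error {
    rq := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                // at most two configmaps may exist in the namespace at once
                corev1.ResourceConfigMaps: resource.MustParse("2"),
            },
        },
    }
    _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{})
    return err
}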
• [SLOW TEST:28.078 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":34,"skipped":587,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:38.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready May 20 22:10:38.407: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.407: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.410: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.410: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.418: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.418: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.435: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:38.435: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 20 22:10:41.210: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels map[test-deployment-static:true] May 20 22:10:41.210: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels map[test-deployment-static:true] May 20 22:10:42.232: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment May 20 22:10:42.240: INFO: observed event type ADDED STEP: waiting for Replicas to scale May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment 
test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.241: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 0 May 20 22:10:42.242: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:42.242: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:42.242: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.242: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.242: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.242: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.244: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.244: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.251: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.251: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:42.257: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:42.257: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:42.263: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:42.263: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:46.855: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:46.855: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:46.867: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 STEP: listing Deployments May 20 22:10:46.871: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment May 20 22:10:46.883: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 STEP: fetching the DeploymentStatus May 20 22:10:46.890: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:46.890: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:46.895: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:46.902: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels 
map[test-deployment:updated test-deployment-static:true] May 20 22:10:46.907: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:50.974: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:52.056: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:52.068: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:52.082: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 20 22:10:55.408: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus May 20 22:10:55.430: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:55.430: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:55.430: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:55.430: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:55.431: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 1 May 20 22:10:55.431: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:55.431: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 3 May 20 22:10:55.431: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:55.431: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 2 May 20 22:10:55.431: INFO: observed Deployment test-deployment in namespace deployment-8140 with ReadyReplicas 3 STEP: deleting the Deployment May 20 22:10:55.438: INFO: observed event type MODIFIED May 20 22:10:55.438: INFO: observed event type MODIFIED May 20 22:10:55.438: INFO: observed event type MODIFIED May 20 22:10:55.438: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED May 20 22:10:55.439: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 20 22:10:55.441: INFO: Log out all the ReplicaSets if there is no deployment created May 20 22:10:55.446: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-8140 97e4039b-de25-4633-bfc8-c6a34f660c6b 47469 4 2022-05-20 22:10:42 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 
deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 831a5379-53ad-4d81-b689-3cf97d6127cd 0xc00690b177 0xc00690b178}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:10:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"831a5379-53ad-4d81-b689-3cf97d6127cd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00690b200 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 22:10:55.449: INFO: ReplicaSet "test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-8140 7d08da76-d4fb-41ca-a569-e9748e1ed81d 47361 3 2022-05-20 22:10:38 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 831a5379-53ad-4d81-b689-3cf97d6127cd 0xc00690b2b7 0xc00690b2b8}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:10:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"831a5379-53ad-4d81-b689-3cf97d6127cd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00690b330 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 22:10:55.452: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-8140 b5b1e70e-4a27-4250-89d5-9211ae48fc42 47459 2 2022-05-20 22:10:46 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 831a5379-53ad-4d81-b689-3cf97d6127cd 0xc00690b3a7 0xc00690b3a8}] [] [{kube-controller-manager Update apps/v1 2022-05-20 22:10:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"831a5379-53ad-4d81-b689-3cf97d6127cd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00690b420 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} May 20 22:10:55.456: INFO: pod: "test-deployment-85d87c6f4b-pq6r7": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-pq6r7 test-deployment-85d87c6f4b- deployment-8140 d9ed791b-12bd-4d55-93e9-c9043fbe76b1 47422 0 2022-05-20 22:10:46 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.119" ], "mac": "56:f6:a7:a3:48:b4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.119" ], "mac": "56:f6:a7:a3:48:b4", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b b5b1e70e-4a27-4250-89d5-9211ae48fc42 0xc00690bf67 0xc00690bf68}] [] [{kube-controller-manager Update v1 2022-05-20 22:10:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b5b1e70e-4a27-4250-89d5-9211ae48fc42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus 
Update v1 2022-05-20 22:10:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:10:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.119\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jzc6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzc6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Ope
rator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.119,StartTime:2022-05-20 22:10:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:10:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://fe0016e50c3ac1ef077bd59d5b41a53bcb7255daf4ed361613647f1bb043ae3d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.119,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 22:10:55.457: INFO: pod: "test-deployment-85d87c6f4b-wmzwp": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-wmzwp test-deployment-85d87c6f4b- deployment-8140 2ed22f81-727a-4140-89b9-c1697afaf0a6 47458 0 2022-05-20 22:10:52 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.121" ], "mac": "7e:36:89:fc:46:6d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.121" ], "mac": "7e:36:89:fc:46:6d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b b5b1e70e-4a27-4250-89d5-9211ae48fc42 0xc00698e28f 0xc00698e2a0}] [] [{kube-controller-manager Update v1 2022-05-20 22:10:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b5b1e70e-4a27-4250-89d5-9211ae48fc42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:10:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:10:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lsbxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsbxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:10:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.121,StartTime:2022-05-20 22:10:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:10:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ee5221d7f0ae7c123dc44204335736334aede3f4819fcd19ec231a471323fee9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:55.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8140" for this suite. 
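The lifecycle steps logged above (create, patch, update the image, fetch status, delete) map onto plain kubectl operations against the same three images seen in the ReplicaSet dumps (agnhost:2.32 for revision 1, pause:3.4.1 for revision 2, httpd:2.4.38-1 for revision 3). A minimal sketch; the container name agnhost is the default kubectl derives from the image basename and is an assumption here:

$ kubectl create deployment test-deployment --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --replicas=2
$ kubectl rollout status deployment/test-deployment
# patch: relabel and swap the pod image in one strategic-merge patch (revision 2)
$ kubectl patch deployment test-deployment -p '{"metadata":{"labels":{"test-deployment":"patched"}},"spec":{"template":{"spec":{"containers":[{"name":"agnhost","image":"k8s.gcr.io/pause:3.4.1"}]}}}}'
# update: roll to a third image (revision 3)
$ kubectl set image deployment/test-deployment agnhost=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
$ kubectl get deployment test-deployment -o jsonpath='{.status.readyReplicas}'
$ kubectl get rs -l app=test-deployment   # superseded ReplicaSets remain, scaled to 0, as in the dump above
$ kubectl delete deployment test-deployment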
• [SLOW TEST:17.090 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":35,"skipped":594,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:55.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-ad05d5de-350c-463f-b91c-6e647c377e6d [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:55.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1369" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":36,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":137,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:33.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-8606 STEP: creating replication controller nodeport-test in namespace services-8606 I0520 22:08:33.311205 23 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8606, replica count: 2 I0520 22:08:36.361743 23 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:39.363279 23 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 22:08:42.363722 23 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 22:08:42.363: INFO: Creating new exec pod May 20 
22:08:53.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' May 20 22:08:53.667: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" May 20 22:08:53.667: INFO: stdout: "nodeport-test-8n2j7" May 20 22:08:53.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.20.80 80' May 20 22:08:53.915: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.20.80 80\nConnection to 10.233.20.80 80 port [tcp/http] succeeded!\n" May 20 22:08:53.915: INFO: stdout: "nodeport-test-8n2j7" May 20 22:08:53.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:08:54.202: INFO: rc: 1 May 20 22:08:54.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:08:55.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:08:55.434: INFO: rc: 1 May 20 22:08:55.434: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:08:56.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:08:56.448: INFO: rc: 1 May 20 22:08:56.448: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:08:57 - 22:09:41: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' [the identical probe was retried roughly once per second over this interval; every attempt returned rc: 1 with stderr "+ echo hostName / + nc -v -t -w 2 10.10.190.207 30238 / nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused / command terminated with exit code 1", error: exit status 1, followed by "Retrying..."]
May 20 22:09:42.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:42.443: INFO: rc: 1 May 20 22:09:42.443: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:43.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:43.456: INFO: rc: 1 May 20 22:09:43.456: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:44.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:44.442: INFO: rc: 1 May 20 22:09:44.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30238 + echo hostName nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:45.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:45.454: INFO: rc: 1 May 20 22:09:45.454: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:46.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:46.455: INFO: rc: 1 May 20 22:09:46.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:09:47.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:47.445: INFO: rc: 1 May 20 22:09:47.445: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:48.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:48.488: INFO: rc: 1 May 20 22:09:48.488: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:49.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:49.451: INFO: rc: 1 May 20 22:09:49.451: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:50.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:50.501: INFO: rc: 1 May 20 22:09:50.501: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:51.459: INFO: rc: 1 May 20 22:09:51.459: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:09:52.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:52.454: INFO: rc: 1 May 20 22:09:52.454: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:53.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:53.438: INFO: rc: 1 May 20 22:09:53.438: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:54.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:54.499: INFO: rc: 1 May 20 22:09:54.499: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:55.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:55.476: INFO: rc: 1 May 20 22:09:55.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:56.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:56.458: INFO: rc: 1 May 20 22:09:56.458: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:09:57.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:57.423: INFO: rc: 1 May 20 22:09:57.423: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:58.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:58.429: INFO: rc: 1 May 20 22:09:58.429: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:09:59.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:09:59.471: INFO: rc: 1 May 20 22:09:59.471: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:00.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:00.453: INFO: rc: 1 May 20 22:10:00.453: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:01.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:01.970: INFO: rc: 1 May 20 22:10:01.970: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:02.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:02.437: INFO: rc: 1 May 20 22:10:02.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:03.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:03.667: INFO: rc: 1 May 20 22:10:03.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:04.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:04.445: INFO: rc: 1 May 20 22:10:04.445: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:05.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:05.576: INFO: rc: 1 May 20 22:10:05.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:06.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:06.485: INFO: rc: 1 May 20 22:10:06.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:07.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:07.501: INFO: rc: 1 May 20 22:10:07.502: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:08.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:08.451: INFO: rc: 1 May 20 22:10:08.451: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:09.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:09.459: INFO: rc: 1 May 20 22:10:09.459: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:10.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:10.453: INFO: rc: 1 May 20 22:10:10.453: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:11.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:11.474: INFO: rc: 1 May 20 22:10:11.474: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:12.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:12.442: INFO: rc: 1 May 20 22:10:12.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:13.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:13.474: INFO: rc: 1 May 20 22:10:13.474: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:14.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:14.464: INFO: rc: 1 May 20 22:10:14.464: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:15.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:16.079: INFO: rc: 1 May 20 22:10:16.079: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:16.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:16.518: INFO: rc: 1 May 20 22:10:16.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:17.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:17.593: INFO: rc: 1 May 20 22:10:17.593: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:18.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:18.755: INFO: rc: 1 May 20 22:10:18.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:19.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:19.482: INFO: rc: 1 May 20 22:10:19.482: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:20.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:20.555: INFO: rc: 1 May 20 22:10:20.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:21.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:21.468: INFO: rc: 1 May 20 22:10:21.468: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:22.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:22.463: INFO: rc: 1 May 20 22:10:22.463: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:23.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:23.515: INFO: rc: 1 May 20 22:10:23.515: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:24.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:24.455: INFO: rc: 1 May 20 22:10:24.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:25.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:25.450: INFO: rc: 1 May 20 22:10:25.450: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:26.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:26.455: INFO: rc: 1 May 20 22:10:26.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:27.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:27.436: INFO: rc: 1 May 20 22:10:27.436: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:28.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:28.465: INFO: rc: 1 May 20 22:10:28.465: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:29.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:29.477: INFO: rc: 1 May 20 22:10:29.477: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:30.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:30.493: INFO: rc: 1 May 20 22:10:30.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:31.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:31.525: INFO: rc: 1 May 20 22:10:31.526: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:32.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:32.454: INFO: rc: 1 May 20 22:10:32.454: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30238 + echo hostName nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:33.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:33.507: INFO: rc: 1 May 20 22:10:33.507: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:34.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:34.480: INFO: rc: 1 May 20 22:10:34.480: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:35.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:35.439: INFO: rc: 1 May 20 22:10:35.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:36.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:36.434: INFO: rc: 1 May 20 22:10:36.434: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:37.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:37.500: INFO: rc: 1 May 20 22:10:37.500: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:38.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:38.572: INFO: rc: 1 May 20 22:10:38.572: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:39.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:39.491: INFO: rc: 1 May 20 22:10:39.491: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:40.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:40.764: INFO: rc: 1 May 20 22:10:40.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:41.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:41.450: INFO: rc: 1 May 20 22:10:41.450: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:42.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:42.496: INFO: rc: 1 May 20 22:10:42.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:43.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:43.493: INFO: rc: 1 May 20 22:10:43.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:44.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:44.460: INFO: rc: 1 May 20 22:10:44.460: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:45.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:46.051: INFO: rc: 1 May 20 22:10:46.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:46.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:46.777: INFO: rc: 1 May 20 22:10:46.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:47.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:47.540: INFO: rc: 1 May 20 22:10:47.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:48.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:48.456: INFO: rc: 1 May 20 22:10:48.456: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:49.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:49.847: INFO: rc: 1 May 20 22:10:49.847: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:50.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:50.506: INFO: rc: 1 May 20 22:10:50.506: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:51.460: INFO: rc: 1 May 20 22:10:51.460: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 20 22:10:52.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:52.682: INFO: rc: 1 May 20 22:10:52.682: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:53.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:53.798: INFO: rc: 1 May 20 22:10:53.798: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:54.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:54.493: INFO: rc: 1 May 20 22:10:54.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 20 22:10:54.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238' May 20 22:10:54.734: INFO: rc: 1 May 20 22:10:54.734: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8606 exec execpodbpshq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30238: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30238 nc: connect to 10.10.190.207 port 30238 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
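------------------------------
The two-minute stretch of log above is a poll-until-timeout loop: the test re-runs the reachability probe roughly once per second until it succeeds or a 2m0s deadline expires. A minimal stand-alone sketch of that pattern in Go, using only the standard library (the endpoint, interval, timeout, and final error message are taken from the log; everything else is illustrative — in particular, the real test execs nc inside the execpodbpshq helper pod, so its probe originates from the pod network, whereas this sketch dials directly from wherever it runs):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP mirrors `nc -v -t -w 2 <host> <port>`: a single TCP connect
// attempt with a 2-second timeout. A nil return means the port accepted
// the connection.
func probeTCP(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	addr := "10.10.190.207:30238" // node IP and NodePort from the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := probeTCP(addr); err != nil {
			fmt.Printf("probe failed: %v; retrying...\n", err)
			time.Sleep(1 * time.Second) // matches the ~1s cadence in the log
			continue
		}
		fmt.Println("service reachable")
		return
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", addr)
}

A steady "Connection refused" (rather than a timeout) typically means the node answered with a TCP RST: the host is reachable, but nothing is listening on or forwarding port 30238. Given that both nodeport-test pods are Running and Ready (see the pod table below), that points more toward the NodePort programming on the node than toward the backends themselves.
------------------------------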
May 20 22:10:54.735: FAIL: Unexpected error:
    <*errors.errorString | 0xc003b13130>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30238 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30238 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000703800, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-8606".
STEP: Found 17 events.
May 20 22:10:54.750: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodbpshq: { } Scheduled: Successfully assigned services-8606/execpodbpshq to node2
May 20 22:10:54.750: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-8n2j7: { } Scheduled: Successfully assigned services-8606/nodeport-test-8n2j7 to node2
May 20 22:10:54.750: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-pl25f: { } Scheduled: Successfully assigned services-8606/nodeport-test-pl25f to node2
May 20 22:10:54.750: INFO: At 2022-05-20 22:08:33 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-pl25f
May 20 22:10:54.750: INFO: At 2022-05-20 22:08:33 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-8n2j7
May 20 22:10:54.750: INFO: At 2022-05-20 22:08:36 +0000 UTC - event for nodeport-test-8n2j7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:10:54.750: INFO: At 2022-05-20 22:08:37 +0000 UTC - event for nodeport-test-8n2j7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 410.293151ms
May 20 22:10:54.750: INFO: At 2022-05-20 22:08:37 +0000 UTC - event for nodeport-test-8n2j7: {kubelet node2} Created: Created container nodeport-test
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:37 +0000 UTC - event for nodeport-test-pl25f: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:37 +0000 UTC - event for nodeport-test-pl25f: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 362.871244ms
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:38 +0000 UTC - event for nodeport-test-8n2j7: {kubelet node2} Started: Started container nodeport-test
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:38 +0000 UTC - event for nodeport-test-pl25f: {kubelet node2} Started: Started container nodeport-test
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:38 +0000 UTC - event for nodeport-test-pl25f: {kubelet node2} Created: Created container nodeport-test
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:45 +0000 UTC - event for execpodbpshq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 22:10:54.751: INFO: At 2022-05-20 22:08:45 +0000 UTC - event for
execpodbpshq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 383.620166ms May 20 22:10:54.751: INFO: At 2022-05-20 22:08:46 +0000 UTC - event for execpodbpshq: {kubelet node2} Started: Started container agnhost-container May 20 22:10:54.751: INFO: At 2022-05-20 22:08:46 +0000 UTC - event for execpodbpshq: {kubelet node2} Created: Created container agnhost-container May 20 22:10:54.754: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:10:54.754: INFO: execpodbpshq node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:42 +0000 UTC }] May 20 22:10:54.754: INFO: nodeport-test-8n2j7 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:33 +0000 UTC }] May 20 22:10:54.754: INFO: nodeport-test-pl25f node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:08:33 +0000 UTC }] May 20 22:10:54.754: INFO: May 20 22:10:54.758: INFO: Logging node info for node master1 May 20 22:10:54.762: INFO: Node Info: &Node{ObjectMeta:{master1 b016dcf2-74b7-4456-916a-8ca363b9ccc3 47329 0 2022-05-20 20:01:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:10:54.762: INFO: Logging kubelet events for node master1 May 20 22:10:54.765: INFO: Logging pods the kubelet thinks is on node master1 May 20 22:10:54.794: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-multus ready: true, restart count 1 May 20 22:10:54.794: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded) May 20 22:10:54.794: INFO: Container docker-registry ready: true, restart count 0 May 20 22:10:54.794: INFO: Container nginx ready: true, restart count 0 May 20 22:10:54.794: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:10:54.794: INFO: Container prometheus-operator ready: true, restart count 0 May 20 22:10:54.794: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-controller-manager ready: true, restart count 3 May 20 22:10:54.794: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:10:54.794: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:10:54.794: INFO: Init container install-cni ready: true, restart count 2 May 20 22:10:54.794: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:10:54.794: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.794: INFO: Container nfd-controller ready: true, restart count 0 May 20 22:10:54.794: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:10:54.794: INFO: Container node-exporter ready: true, restart count 0 May 20 22:10:54.794: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-scheduler ready: true, restart count 1 May 20 22:10:54.794: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.794: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:10:54.886: INFO: Latency metrics for node master1 May 20 22:10:54.886: INFO: Logging node info for node master2 May 20 22:10:54.888: INFO: Node Info: &Node{ObjectMeta:{master2 ddc04b08-e43a-4e18-a612-aa3bf7f8411e 47332 0 2022-05-20 20:01:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:10:54.888: INFO: Logging kubelet events for node master2 May 20 22:10:54.891: INFO: Logging pods the kubelet thinks is on node master2 May 20 22:10:54.911: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container kube-controller-manager ready: true, restart count 2 May 20 22:10:54.911: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container kube-proxy ready: true, restart count 1 May 20 22:10:54.911: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:10:54.911: INFO: Init container install-cni ready: true, restart count 2 May 20 22:10:54.911: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:10:54.911: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container coredns ready: true, restart count 1 May 20 22:10:54.911: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container autoscaler ready: true, restart count 1 May 20 22:10:54.911: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:10:54.911: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:10:54.911: INFO: Container node-exporter ready: true, restart count 0 May 20 22:10:54.911: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:10:54.911: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container kube-multus ready: true, restart count 1 May 20 22:10:54.911: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:10:54.911: INFO: Container kube-scheduler ready: true, restart count 3 May 20 22:10:54.997: INFO: Latency metrics for node master2 May 20 22:10:54.997: INFO: Logging node info for node master3 May 20 22:10:55.001: INFO: Node Info: &Node{ObjectMeta:{master3 f42c1bd6-d828-4857-9180-56c73dcc370f 47341 0 2022-05-20 20:02:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:10:46 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:10:55.001: INFO: Logging kubelet events for node master3 May 20 22:10:55.004: INFO: Logging pods the kubelet thinks is on node master3 May 20 22:10:55.017: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.017: INFO: Container kube-controller-manager ready: true, restart count 1 May 20 22:10:55.017: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.017: INFO: Container kube-scheduler ready: true, restart count 2 May 20 22:10:55.017: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.017: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:10:55.017: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:10:55.017: INFO: Init container install-cni ready: true, restart count 0 May 20 22:10:55.017: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:10:55.017: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.017: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:10:55.017: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.017: INFO: Container kube-multus ready: true, restart count 1 May 20 22:10:55.017: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.017: INFO: Container coredns ready: true, restart count 1 May 20 22:10:55.017: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:10:55.017: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:10:55.017: INFO: Container node-exporter ready: true, restart count 0 May 20 22:10:55.099: INFO: Latency metrics for node master3 May 20 22:10:55.099: INFO: Logging node info for node node1 May 20 22:10:55.102: INFO: Node Info: &Node{ObjectMeta:{node1 65c381dd-b6f5-4e67-a327-7a45366d15af 47444 0 2022-05-20 20:03:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:15:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:10:55.103: INFO: Logging kubelet events for node node1 May 20 22:10:55.106: INFO: Logging pods the kubelet thinks is on node node1 May 20 22:10:55.123: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container 
statuses recorded) May 20 22:10:55.123: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:10:55.123: INFO: Container node-exporter ready: true, restart count 0 May 20 22:10:55.123: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.123: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:10:55.123: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded) May 20 22:10:55.123: INFO: Container config-reloader ready: true, restart count 0 May 20 22:10:55.123: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:10:55.123: INFO: Container grafana ready: true, restart count 0 May 20 22:10:55.123: INFO: Container prometheus ready: true, restart count 1 May 20 22:10:55.123: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.123: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:10:55.123: INFO: pod-handle-http-request started at 2022-05-20 22:10:34 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.123: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:10:55.123: INFO: pod-with-poststart-http-hook started at 2022-05-20 22:10:38 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.123: INFO: Container pod-with-poststart-http-hook ready: false, restart count 0 May 20 22:10:55.123: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:10:55.123: INFO: Container collectd ready: true, restart count 0 May 20 22:10:55.123: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:10:55.123: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:10:55.123: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded) May 20 22:10:55.123: INFO: Container discover ready: false, restart count 0 May 20 22:10:55.123: INFO: Container init ready: false, restart count 0 May 20 22:10:55.123: INFO: Container install ready: false, restart count 0 May 20 22:10:55.123: INFO: pod-projected-secrets-a5ade82d-3e69-4c49-988c-b04f2b416d05 started at 2022-05-20 22:09:51 +0000 UTC (0+3 container statuses recorded) May 20 22:10:55.123: INFO: Container creates-volume-test ready: true, restart count 0 May 20 22:10:55.123: INFO: Container dels-volume-test ready: true, restart count 0 May 20 22:10:55.124: INFO: Container upds-volume-test ready: true, restart count 0 May 20 22:10:55.124: INFO: ss2-0 started at 2022-05-20 22:10:25 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container webserver ready: false, restart count 0 May 20 22:10:55.124: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:10:55.124: INFO: test-pod started at 2022-05-20 22:06:36 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container webserver ready: true, restart count 0 May 20 22:10:55.124: INFO: test-webserver-78e24097-06d9-4a09-92f5-649892c8b93d started at 2022-05-20 22:08:45 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container test-webserver ready: true, restart count 0 May 20 22:10:55.124: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) 
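(Editor's note: the per-node "Logging pods the kubelet thinks is on node ..." dumps in this section come from listing pods whose spec.nodeName matches the node. The following is a hedged client-go sketch of that query, not the framework's own helper; the kubeconfig path is the one the suite logs at startup, and the node name "node1" is taken from the dump above.)

// List all pods bound to a given node via a field selector,
// approximating the per-node pod dump shown in the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the suite uses (/root/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The field selector restricts the list to pods scheduled onto node1,
	// across all namespaces ("").
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}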
May 20 22:10:55.124: INFO: Init container install-cni ready: true, restart count 2 May 20 22:10:55.124: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:10:55.124: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded) May 20 22:10:55.124: INFO: Container nodereport ready: true, restart count 0 May 20 22:10:55.124: INFO: Container reconcile ready: true, restart count 0 May 20 22:10:55.124: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:10:55.124: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:10:55.124: INFO: liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 started at 2022-05-20 22:09:29 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container agnhost-container ready: true, restart count 4 May 20 22:10:55.124: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.124: INFO: Container kube-multus ready: true, restart count 1 May 20 22:10:55.310: INFO: Latency metrics for node node1 May 20 22:10:55.310: INFO: Logging node info for node node2 May 20 22:10:55.313: INFO: Node Info: &Node{ObjectMeta:{node2 a0e0a426-876d-4419-96e4-c6977ef3393c 47445 0 2022-05-20 20:03:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:10:52 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:10:55.314: INFO: Logging kubelet events for node node2 May 20 22:10:55.316: INFO: Logging pods the kubelet thinks are on node node2 May 20 22:10:55.330: INFO: forbid-27551406-k5fwc started at 2022-05-20 22:06:00 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container c ready: true, restart count 0 May 20 22:10:55.330: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:10:55.330: INFO: nodeport-test-pl25f started at 2022-05-20 22:08:33 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container nodeport-test ready: true, restart count 0 May 20 22:10:55.330: INFO: test-deployment-748588b7cd-5j8xn started at 2022-05-20 22:10:42 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container test-deployment ready: true, restart count 0 May 20 22:10:55.330: INFO:
execpodbpshq started at 2022-05-20 22:08:42 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:10:55.330: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container kube-multus ready: true, restart count 1 May 20 22:10:55.330: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:10:55.330: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container tas-extender ready: true, restart count 0 May 20 22:10:55.330: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded) May 20 22:10:55.330: INFO: Container discover ready: false, restart count 0 May 20 22:10:55.330: INFO: Container init ready: false, restart count 0 May 20 22:10:55.330: INFO: Container install ready: false, restart count 0 May 20 22:10:55.330: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:10:55.330: INFO: Container collectd ready: true, restart count 0 May 20 22:10:55.330: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:10:55.330: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:10:55.330: INFO: nodeport-test-8n2j7 started at 2022-05-20 22:08:33 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container nodeport-test ready: true, restart count 0 May 20 22:10:55.330: INFO: test-deployment-85d87c6f4b-pq6r7 started at 2022-05-20 22:10:46 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container test-deployment ready: true, restart count 0 May 20 22:10:55.330: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:10:55.330: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:10:55.330: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:10:55.330: INFO: Init container install-cni ready: true, restart count 1 May 20 22:10:55.330: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:10:55.330: INFO: test-deployment-85d87c6f4b-wmzwp started at 2022-05-20 22:10:52 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container test-deployment ready: false, restart count 0 May 20 22:10:55.330: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:10:55.330: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:10:55.330: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:10:55.330: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded) May 20 22:10:55.330: INFO: Container nodereport ready: true, restart count 0 May 20 22:10:55.330: INFO: Container reconcile 
ready: true, restart count 0 May 20 22:10:55.330: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:10:55.330: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:10:55.330: INFO: Container node-exporter ready: true, restart count 0 May 20 22:10:55.767: INFO: Latency metrics for node node2 May 20 22:10:55.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8606" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [142.499 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:10:54.735: Unexpected error: <*errors.errorString | 0xc003b13130>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30238 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30238 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":11,"skipped":137,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:55.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates May 20 22:10:55.849: INFO: created test-podtemplate-1 May 20 22:10:55.851: INFO: created test-podtemplate-2 May 20 22:10:55.856: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates May 20 22:10:55.858: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity May 20 22:10:55.867: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:55.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-850" for this suite. 
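The NodePort failure recorded above, just before the PodTemplates block, is the conformance test exhausting its 2m0s reachability poll against 10.10.190.207:30238. For orientation, a minimal sketch of the kind of NodePort Service that test stands up, written against the k8s.io/api v0.21.x types this suite runs on; the object name, selector, and port numbers here are illustrative, not the test's actual spec:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A Service of type NodePort: the apiserver allocates a port in the
	// node-port range (default 30000-32767) unless one is requested, and
	// kube-proxy then answers on <nodeIP>:<nodePort> on every node.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "nodeport-test",  // illustrative name
			Namespace: "services-8606",  // the namespace destroyed above
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "nodeport-test"}, // illustrative selector
			Ports: []corev1.ServicePort{{
				Protocol:   corev1.ProtocolTCP,
				Port:       80,                  // service port
				TargetPort: intstr.FromInt(8080), // container port
			}},
		},
	}
	fmt.Printf("%+v\n", svc.Spec)
}
```

A TCP connect to <nodeIP>:<nodePort> only succeeds once kube-proxy has programmed the port on every node and at least one backing pod is Ready; both nodeport-test pods report ready in the listings above, which suggests looking at the node-port data path rather than the endpoints themselves.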
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":12,"skipped":154,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:34.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. May 20 22:10:34.491: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:10:36.495: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:10:38.494: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 20 22:10:38.511: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:10:40.515: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:10:42.514: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 20 22:10:42.525: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:42.528: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:44.529: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:44.532: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:46.529: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:46.533: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:48.528: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:48.531: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:50.531: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:50.535: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:52.528: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:52.531: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:54.529: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:54.532: INFO: Pod pod-with-poststart-http-hook still exists May 20 22:10:56.529: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 22:10:56.533: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:10:56.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-lifecycle-hook-6682" for this suite. • [SLOW TEST:22.086 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":424,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:00.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6611 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 20 22:09:00.150: INFO: Found 0 stateful pods, waiting for 3 May 20 22:09:10.155: INFO: Found 2 stateful pods, waiting for 3 May 20 22:09:20.157: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 22:09:20.157: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:09:20.158: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 20 22:09:20.184: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 20 22:09:30.216: INFO: Updating stateful set ss2 May 20 22:09:30.221: INFO: Waiting for Pod statefulset-6611/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted May 20 22:09:40.245: INFO: Found 1 stateful pods, waiting for 3 May 20 22:09:50.250: INFO: Found 2 stateful pods, waiting for 3 May 20 22:10:00.252: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 22:10:00.252: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:10:00.252: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 20 22:10:10.253: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 22:10:10.253: INFO: Waiting for pod 
ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:10:10.253: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 20 22:10:10.276: INFO: Updating stateful set ss2 May 20 22:10:10.283: INFO: Waiting for Pod statefulset-6611/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 20 22:10:20.310: INFO: Updating stateful set ss2 May 20 22:10:20.316: INFO: Waiting for StatefulSet statefulset-6611/ss2 to complete update May 20 22:10:20.316: INFO: Waiting for Pod statefulset-6611/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 20 22:10:30.322: INFO: Deleting all statefulset in ns statefulset-6611 May 20 22:10:30.324: INFO: Scaling statefulset ss2 to 0 May 20 22:11:00.339: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:11:00.342: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:00.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6611" for this suite. • [SLOW TEST:120.253 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":20,"skipped":435,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":121,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:05:35.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0520 22:05:35.876200 28 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a 
ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:01.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-4409" for this suite. • [SLOW TEST:326.058 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":9,"skipped":121,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:01.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy May 20 22:11:01.933: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6364 proxy --unix-socket=/tmp/kubectl-proxy-unix862291725/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:02.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6364" for this suite. 
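Back on the ForbidConcurrent CronJob test that completed above: concurrencyPolicy: Forbid makes the controller skip a scheduled run while a previous Job is still active, which is what the "Ensuring no more jobs are scheduled" step watches for across the 326-second run. A hedged sketch of such an object using the batch/v1 Go types (the suite itself still talks to the deprecated batch/v1beta1 API, per the warning above; the name, image, and sleep duration are illustrative):

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid"}, // illustrative name
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			// Forbid: a new run is skipped while the previous Job is active.
			ConcurrencyPolicy: batchv1.ForbidConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox:1.28",
								Command: []string{"sleep", "300"}, // outlive the schedule interval
							}},
						},
					},
				},
			},
		},
	}
	fmt.Println(cj.Spec.ConcurrencyPolicy)
}
```

With a one-minute schedule and a job that sleeps five minutes, later ticks are skipped rather than stacked, so exactly one active Job exists at a time, matching "Ensuring exactly one running job exists" above.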
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":10,"skipped":121,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:51.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-a35f287d-6dd7-4205-9b60-f95ec7a0a472 STEP: Creating secret with name s-test-opt-upd-2c791531-d8f6-4d00-844b-112fb6fa71e3 STEP: Creating the pod May 20 22:09:51.991: INFO: The status of Pod pod-projected-secrets-a5ade82d-3e69-4c49-988c-b04f2b416d05 is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:53.996: INFO: The status of Pod pod-projected-secrets-a5ade82d-3e69-4c49-988c-b04f2b416d05 is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:55.994: INFO: The status of Pod pod-projected-secrets-a5ade82d-3e69-4c49-988c-b04f2b416d05 is Pending, waiting for it to be Running (with Ready = true) May 20 22:09:57.994: INFO: The status of Pod pod-projected-secrets-a5ade82d-3e69-4c49-988c-b04f2b416d05 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-a35f287d-6dd7-4205-9b60-f95ec7a0a472 STEP: Updating secret s-test-opt-upd-2c791531-d8f6-4d00-844b-112fb6fa71e3 STEP: Creating secret with name s-test-opt-create-8d579602-f996-42e6-a61e-2759be5dcdf4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:06.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3316" for this suite. 
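The optional-updates test that just finished deletes one source secret, updates another, and creates a third, then waits for the kubelet to refresh the projected volume. The mechanics hinge on optional: true on each projection; a rough sketch against the v0.21 core types, with the volume and secret names shortened stand-ins for the generated ones above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	// A projected volume pulling from two secrets. Optional=true lets the
	// pod start, and the volume keep syncing, even while a named secret is
	// absent, which is what the delete/update/create dance above relies on.
	vol := corev1.Volume{
		Name: "projected-secrets", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	fmt.Printf("%d projected sources\n", len(vol.VolumeSource.Projected.Sources))
}
```

Because the projections are optional, deleting one secret does not break the volume; the kubelet drops that file on its next sync, and a newly created optional secret shows up the same way, which is the "waiting to observe update in volume" step.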
• [SLOW TEST:74.720 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":767,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:02.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:09.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6540" for this suite. 
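The ReplicationController lifecycle test above drives one RC through create, patch, status patch, scale, and a collection delete. The object itself is small; a sketch of a comparable RC in Go (the name and labels are illustrative, though the httpd image does appear in the node image list earlier):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "rc-lifecycle"}, // illustrative name
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"app": "rc-lifecycle"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "rc-lifecycle"},
				},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
				}}},
			},
		},
	}
	fmt.Printf("replicas=%d\n", *rc.Spec.Replicas)
}
```

The STEP sequence above then maps onto the usual verbs: Patch against the object and its status subresource, a patch of spec.replicas for the scale steps, and a DeleteCollection with a label selector for the cleanup.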
• [SLOW TEST:7.705 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":11,"skipped":138,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:06.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-bccb6376-ea7b-4a42-923f-9d7773ca45ab STEP: Creating a pod to test consume secrets May 20 22:11:06.712: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b" in namespace "projected-4780" to be "Succeeded or Failed" May 20 22:11:06.717: INFO: Pod "pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422746ms May 20 22:11:08.721: INFO: Pod "pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009119619s May 20 22:11:10.726: INFO: Pod "pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014023025s STEP: Saw pod success May 20 22:11:10.726: INFO: Pod "pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b" satisfied condition "Succeeded or Failed" May 20 22:11:10.729: INFO: Trying to get logs from node node1 pod pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b container secret-volume-test: STEP: delete the pod May 20 22:11:10.743: INFO: Waiting for pod pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b to disappear May 20 22:11:10.745: INFO: Pod pod-projected-secrets-6d1817c0-fb95-4188-afb3-51f3f263496b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:10.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4780" for this suite. 
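The multi-volume secret test that just passed mounts one Secret into a pod twice and reads it back from both paths. A sketch of that shape (all names and mount paths illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two volumes backed by the same Secret, mounted at different paths.
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "projected-secret-test"}, // illustrative name
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Volumes), "mounts of one secret")
}
```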
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":771,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:55.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 20 22:10:55.794: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 20 22:10:57.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:10:59.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:11:01.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:11:04.828: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:11:04.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:12.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9611" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:17.396 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":37,"skipped":642,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:10.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:11:10.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859" in namespace "projected-8375" to be "Succeeded or Failed" May 20 22:11:10.831: INFO: Pod 
"downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859": Phase="Pending", Reason="", readiness=false. Elapsed: 3.874133ms May 20 22:11:12.835: INFO: Pod "downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008215218s May 20 22:11:14.839: INFO: Pod "downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012026492s STEP: Saw pod success May 20 22:11:14.839: INFO: Pod "downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859" satisfied condition "Succeeded or Failed" May 20 22:11:14.841: INFO: Trying to get logs from node node2 pod downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859 container client-container: STEP: delete the pod May 20 22:11:14.854: INFO: Waiting for pod downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859 to disappear May 20 22:11:14.856: INFO: Pod downwardapi-volume-4d8e4fb3-0359-4544-a7b6-399961dad859 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:14.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8375" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:56.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-8750 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8750 STEP: Deleting pre-stop pod May 20 22:11:15.636: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:15.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8750" for this suite. 
• [SLOW TEST:19.089 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":29,"skipped":433,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:15.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions May 20 22:11:15.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6720 api-versions' May 20 22:11:15.804: INFO: stderr: "" May 20 22:11:15.804: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nmygroup.example.com/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:15.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6720" for this suite. 
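kubectl api-versions is a thin wrapper over the discovery API, so the check this test performs can be reproduced directly with client-go. A sketch under that assumption (function name and wiring are mine):

package sketches

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// hasCoreV1 reports whether the apiserver advertises the core "v1"
// group version, the entry this test looks for in the output above.
func hasCoreV1(config *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		return false, err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}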
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":30,"skipped":438,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:13.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 20 22:11:13.059: INFO: The status of Pod annotationupdateabad03ac-b7f2-4d85-be12-d6bc637f012b is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:15.062: INFO: The status of Pod annotationupdateabad03ac-b7f2-4d85-be12-d6bc637f012b is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:17.062: INFO: The status of Pod annotationupdateabad03ac-b7f2-4d85-be12-d6bc637f012b is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:19.064: INFO: The status of Pod annotationupdateabad03ac-b7f2-4d85-be12-d6bc637f012b is Running (Ready = true) May 20 22:11:19.634: INFO: Successfully updated pod "annotationupdateabad03ac-b7f2-4d85-be12-d6bc637f012b" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:21.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6619" for this suite. 
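The pass above hinges on the kubelet rewriting projected downward-API files when pod metadata changes; after the annotation update at 22:11:19.634 the test simply re-reads the mounted file. A sketch of the volume shape involved (volume and file names are illustrative):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
)

// annotationsVolume builds a projected downward-API volume. The kubelet
// refreshes the "annotations" file after the pod's annotations change,
// which is what the test observes.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
}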
• [SLOW TEST:8.658 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":650,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:15.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 20 22:11:15.880: INFO: Waiting up to 5m0s for pod "pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81" in namespace "emptydir-5123" to be "Succeeded or Failed" May 20 22:11:15.883: INFO: Pod "pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960691ms May 20 22:11:17.887: INFO: Pod "pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006835739s May 20 22:11:19.890: INFO: Pod "pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009622378s May 20 22:11:21.894: INFO: Pod "pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013729633s STEP: Saw pod success May 20 22:11:21.894: INFO: Pod "pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81" satisfied condition "Succeeded or Failed" May 20 22:11:21.897: INFO: Trying to get logs from node node2 pod pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81 container test-container: STEP: delete the pod May 20 22:11:21.911: INFO: Waiting for pod pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81 to disappear May 20 22:11:21.913: INFO: Pod pod-8d6c27d4-926e-4489-bbf0-c6530ed91e81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:21.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5123" for this suite. 
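The emptyDir permission matrix exercised throughout this run ((root,0777,default) here, (non-root,0644,default) later) varies only the pod's user and mode bits around one volume shape. A minimal sketch of that shape (names are illustrative); the test container then stats the mount and writes a file to verify mode and medium:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
)

// emptyDirVolume returns an emptyDir on the node's default medium (disk,
// as opposed to corev1.StorageMediumMemory) plus a matching mount.
func emptyDirVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	return vol, corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
}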
• [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":450,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:14.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod May 20 22:11:15.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 create -f -' May 20 22:11:15.414: INFO: stderr: "" May 20 22:11:15.414: INFO: stdout: "pod/pause created\n" May 20 22:11:15.414: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 20 22:11:15.414: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9886" to be "running and ready" May 20 22:11:15.420: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.360202ms May 20 22:11:17.423: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008961318s May 20 22:11:19.431: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017264128s May 20 22:11:21.435: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.020895963s May 20 22:11:21.435: INFO: Pod "pause" satisfied condition "running and ready" May 20 22:11:21.435: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod May 20 22:11:21.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 label pods pause testing-label=testing-label-value' May 20 22:11:21.603: INFO: stderr: "" May 20 22:11:21.603: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 20 22:11:21.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 get pod pause -L testing-label' May 20 22:11:21.769: INFO: stderr: "" May 20 22:11:21.769: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod May 20 22:11:21.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 label pods pause testing-label-' May 20 22:11:21.925: INFO: stderr: "" May 20 22:11:21.925: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 20 22:11:21.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 get pod pause -L testing-label' May 20 22:11:22.112: INFO: stderr: "" May 20 22:11:22.112: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources May 20 22:11:22.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 delete --grace-period=0 --force -f -' May 20 22:11:22.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 22:11:22.250: INFO: stdout: "pod \"pause\" force deleted\n" May 20 22:11:22.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 get rc,svc -l name=pause --no-headers' May 20 22:11:22.451: INFO: stderr: "No resources found in kubectl-9886 namespace.\n" May 20 22:11:22.451: INFO: stdout: "" May 20 22:11:22.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9886 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 22:11:22.629: INFO: stderr: "" May 20 22:11:22.629: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:22.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9886" for this suite. 
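kubectl label is sugar over a pod patch, so the add/verify/remove cycle above has a direct client-go equivalent. A sketch, assuming a configured clientset cs; note that a strategic-merge patch with a null value is how a label key is deleted:

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// addLabel mirrors `kubectl label pods pause testing-label=testing-label-value`.
func addLabel(cs kubernetes.Interface, ns, pod string) error {
	patch := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

// removeLabel mirrors `kubectl label pods pause testing-label-`:
// a null value removes the key from the label map.
func removeLabel(cs kubernetes.Interface, ns, pod string) error {
	patch := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}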
• [SLOW TEST:7.653 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":46,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:21.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-4b2f79c1-1a43-45dc-b3a1-81a3b1a5c744 STEP: Creating a pod to test consume secrets May 20 22:11:21.983: INFO: Waiting up to 5m0s for pod "pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f" in namespace "secrets-3075" to be "Succeeded or Failed" May 20 22:11:21.986: INFO: Pod "pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404656ms May 20 22:11:23.990: INFO: Pod "pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006617473s May 20 22:11:25.993: INFO: Pod "pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009596285s STEP: Saw pod success May 20 22:11:25.993: INFO: Pod "pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f" satisfied condition "Succeeded or Failed" May 20 22:11:25.995: INFO: Trying to get logs from node node2 pod pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f container secret-volume-test: STEP: delete the pod May 20 22:11:26.009: INFO: Waiting for pod pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f to disappear May 20 22:11:26.012: INFO: Pod pod-secrets-84c7a083-d912-4bb2-8d81-2167385a743f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:26.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3075" for this suite. 
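The "multiple volumes" case mounts one Secret at two paths and reads it back from both. A sketch of the volume list (volume names are illustrative; the test container mounts both under different directories):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
)

// twoSecretVolumes references the same Secret from two volumes, the shape
// the pod-secrets pod above uses.
func twoSecretVolumes(secretName string) []corev1.Volume {
	src := func() *corev1.SecretVolumeSource {
		return &corev1.SecretVolumeSource{SecretName: secretName}
	}
	return []corev1.Volume{
		{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{Secret: src()}},
		{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{Secret: src()}},
	}
}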
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":460,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:26.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-ce952fe0-0ad6-42ff-b9d6-abcbb4f44f8a [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:26.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5318" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":33,"skipped":463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:26.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:26.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-2083" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":34,"skipped":541,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:00.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
May 20 22:11:00.408: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:02.411: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:04.414: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:06.411: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 20 22:11:06.424: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:08.430: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:10.429: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 20 22:11:10.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:10.439: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:12.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:12.443: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:14.442: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:14.445: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:16.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:16.444: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:18.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:18.445: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:20.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:20.444: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:22.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:22.442: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:24.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:24.444: INFO: Pod pod-with-prestop-http-hook still exists May 20 22:11:26.439: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 22:11:26.445: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:26.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6401" for this suite. 
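The pod-with-prestop-http-hook pod above carries a lifecycle stanza along these lines. A hedged sketch: the path, port, and target host are illustrative, and recent client-go names the handler type LifecycleHandler, whereas the v1.21-era API called it Handler:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// withPreStopHTTP attaches a preStop HTTP GET hook to a container. On pod
// deletion the kubelet performs the GET (here against the handler pod)
// before terminating the container.
func withPreStopHTTP(c corev1.Container, handlerIP string) corev1.Container {
	c.Lifecycle = &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop", // illustrative
				Host: handlerIP,
				Port: intstr.FromInt(8080), // illustrative
			},
		},
	}
	return c
}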
• [SLOW TEST:26.085 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":439,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:26.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:26.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-871" for this suite. 
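The ServiceAccount lifecycle steps logged above map one-for-one onto CoreV1 calls. A compressed sketch, assuming a configured clientset cs (name and label are mine; the watch and patch steps are omitted for brevity):

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// serviceAccountLifecycle creates a labeled ServiceAccount, finds it again
// by label selector, and deletes it.
func serviceAccountLifecycle(cs kubernetes.Interface, ns string) error {
	sa := &corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{
		Name:   "e2e-sa",
		Labels: map[string]string{"purpose": "e2e"},
	}}
	if _, err := cs.CoreV1().ServiceAccounts(ns).Create(
		context.TODO(), sa, metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().ServiceAccounts(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "purpose=e2e"}); err != nil {
		return err
	}
	return cs.CoreV1().ServiceAccounts(ns).Delete(
		context.TODO(), "e2e-sa", metav1.DeleteOptions{})
}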
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":22,"skipped":439,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:26.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota May 20 22:11:26.307: INFO: Pod name sample-pod: Found 0 pods out of 1 May 20 22:11:31.311: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:31.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9102" for this suite. • [SLOW TEST:5.051 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":35,"skipped":556,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:31.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events May 20 22:11:31.391: INFO: created test-event-1 May 20 22:11:31.393: INFO: created test-event-2 May 20 22:11:31.396: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events May 20 22:11:31.399: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity May 20 22:11:31.412: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:31.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3398" for this suite. 
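The "requesting DeleteCollection of events" line above is a single round trip that removes every matching Event at once. A sketch, assuming test-event-1 through test-event-3 share a label (clientset and selector are illustrative):

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteEventSet deletes all Events in the namespace matching the label
// selector, mirroring the DeleteCollection request in the log above.
func deleteEventSet(cs kubernetes.Interface, ns, selector string) error {
	return cs.CoreV1().Events(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: selector})
}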
• ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:26.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:11:26.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57" in namespace "downward-api-5282" to be "Succeeded or Failed" May 20 22:11:26.559: INFO: Pod "downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062644ms May 20 22:11:28.570: INFO: Pod "downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013039363s May 20 22:11:30.575: INFO: Pod "downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01753306s May 20 22:11:32.579: INFO: Pod "downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021520482s STEP: Saw pod success May 20 22:11:32.579: INFO: Pod "downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57" satisfied condition "Succeeded or Failed" May 20 22:11:32.581: INFO: Trying to get logs from node node2 pod downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57 container client-container: STEP: delete the pod May 20 22:11:32.595: INFO: Waiting for pod downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57 to disappear May 20 22:11:32.597: INFO: Pod downwardapi-volume-b708c580-cd6f-49fb-8104-f19358959f57 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:32.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5282" for this suite. 
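The cpu-request case surfaces container resources through a downward-API volume file rather than an environment variable. A sketch of the file entry (path and container name are illustrative); the 1m divisor makes the file report the request in millicores:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestFile exposes requests.cpu of the named container as a file.
// With a 1m divisor, a 250m request is written as "250".
func cpuRequestFile(container string) corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "cpu_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: container,
			Resource:      "requests.cpu",
			Divisor:       resource.MustParse("1m"),
		},
	}
}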
• [SLOW TEST:6.079 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":447,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:21.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5781 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5781;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5781 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5781;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5781.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5781.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5781.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5781.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5781.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5781.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5781.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 113.54.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.54.113_udp@PTR;check="$$(dig +tcp +noall +answer +search 113.54.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.54.113_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5781 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5781;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5781 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5781;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5781.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5781.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5781.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5781.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5781.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5781.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5781.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5781.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 113.54.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.54.113_udp@PTR;check="$$(dig +tcp +noall +answer +search 113.54.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.54.113_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 22:11:27.741: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.743: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781 from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.749: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781 from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.751: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.754: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.756: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.758: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.777: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.779: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.782: INFO: Unable to read jessie_udp@dns-test-service.dns-5781 from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781 from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.787: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.789: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.791: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.794: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc from pod dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea: the server could not find the requested resource (get pods dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea) May 20 22:11:27.808: INFO: Lookups using dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5781 wheezy_tcp@dns-test-service.dns-5781 wheezy_udp@dns-test-service.dns-5781.svc wheezy_tcp@dns-test-service.dns-5781.svc wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5781 jessie_tcp@dns-test-service.dns-5781 jessie_udp@dns-test-service.dns-5781.svc jessie_tcp@dns-test-service.dns-5781.svc jessie_udp@_http._tcp.dns-test-service.dns-5781.svc jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc] May 20 22:11:32.886: INFO: DNS probes using dns-5781/dns-test-d75aa165-6179-4f4f-95c2-f3d491f36aea succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:32.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5781" for this suite. 
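The wheezy and jessie probes script dig inside the pod; stripped of the plumbing, the partial-qualified-name check reduces to one resolver call. A Go sketch (names follow the test; it only succeeds inside a pod, where resolv.conf's search path and ndots setting expand the partial name to its cluster.local form):

package sketches

import (
	"context"
	"net"
	"time"
)

// resolveService looks up a partially qualified service name. In-cluster,
// "dns-test-service.dns-5781.svc" is search-expanded to
// "dns-test-service.dns-5781.svc.cluster.local" before resolution.
func resolveService(ctx context.Context) ([]string, error) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	return net.DefaultResolver.LookupHost(ctx, "dns-test-service.dns-5781.svc")
}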
• [SLOW TEST:11.243 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":39,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:27.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:10:27.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 20 22:10:35.324: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-20T22:10:35Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-20T22:10:35Z]] name:name1 resourceVersion:47162 uid:4ae576fe-ad58-45cc-9908-2733f451a7c1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 20 22:10:45.332: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-20T22:10:45Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-20T22:10:45Z]] name:name2 resourceVersion:47328 uid:596e4ceb-0216-406b-8327-a5553f8273da] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 20 22:10:55.339: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-20T22:10:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-20T22:10:55Z]] name:name1 resourceVersion:47455 uid:4ae576fe-ad58-45cc-9908-2733f451a7c1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 20 22:11:05.346: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-20T22:10:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-20T22:11:05Z]] 
name:name2 resourceVersion:47771 uid:596e4ceb-0216-406b-8327-a5553f8273da] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 20 22:11:15.355: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-20T22:10:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-20T22:10:55Z]] name:name1 resourceVersion:48102 uid:4ae576fe-ad58-45cc-9908-2733f451a7c1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 20 22:11:25.364: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-20T22:10:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-20T22:11:05Z]] name:name2 resourceVersion:48339 uid:596e4ceb-0216-406b-8327-a5553f8273da] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:35.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3435" for this suite. • [SLOW TEST:68.161 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":31,"skipped":577,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":36,"skipped":571,"failed":0} [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:31.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 20 22:11:31.459: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:39.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3655" for this suite. • [SLOW TEST:8.062 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":37,"skipped":571,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:35.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 22:11:35.939: INFO: Waiting up to 5m0s for pod "pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d" in namespace "emptydir-2479" to be "Succeeded or Failed" May 20 22:11:35.941: INFO: Pod "pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165561ms May 20 22:11:37.943: INFO: Pod "pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004805309s May 20 22:11:39.948: INFO: Pod "pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009101443s STEP: Saw pod success May 20 22:11:39.948: INFO: Pod "pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d" satisfied condition "Succeeded or Failed" May 20 22:11:39.951: INFO: Trying to get logs from node node2 pod pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d container test-container: STEP: delete the pod May 20 22:11:40.042: INFO: Waiting for pod pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d to disappear May 20 22:11:40.045: INFO: Pod pod-19be0d91-f933-4ad0-8adc-0749af9c2d5d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:40.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2479" for this suite. 
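A little above, the RestartNever init-container case passed in about 8 seconds; the pod it creates is shaped roughly like this (image and names are illustrative). Because the restart policy is Never, the failing init container runs once, the pod is marked Failed, and the app container never starts:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod returns a RestartNever pod whose init container exits
// non-zero, so the kubelet never starts the app container.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox", // illustrative
				Command: []string{"/bin/false"},
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox", // illustrative
				Command: []string{"/bin/true"},
			}},
		},
	}
}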
• ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:32.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 20 22:11:33.018: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:44.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4141" for this suite. • [SLOW TEST:11.877 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":687,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:39.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium May 20 22:11:39.557: INFO: Waiting up to 5m0s for pod "pod-752b12dc-5d1c-49b4-8869-bfa6d8236845" in namespace "emptydir-770" to be "Succeeded or Failed" May 20 22:11:39.559: INFO: Pod "pod-752b12dc-5d1c-49b4-8869-bfa6d8236845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.684439ms May 20 22:11:41.563: INFO: Pod "pod-752b12dc-5d1c-49b4-8869-bfa6d8236845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006156795s May 20 22:11:43.570: INFO: Pod "pod-752b12dc-5d1c-49b4-8869-bfa6d8236845": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013403413s May 20 22:11:45.574: INFO: Pod "pod-752b12dc-5d1c-49b4-8869-bfa6d8236845": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017151747s STEP: Saw pod success May 20 22:11:45.574: INFO: Pod "pod-752b12dc-5d1c-49b4-8869-bfa6d8236845" satisfied condition "Succeeded or Failed" May 20 22:11:45.576: INFO: Trying to get logs from node node2 pod pod-752b12dc-5d1c-49b4-8869-bfa6d8236845 container test-container: STEP: delete the pod May 20 22:11:45.587: INFO: Waiting for pod pod-752b12dc-5d1c-49b4-8869-bfa6d8236845 to disappear May 20 22:11:45.589: INFO: Pod pod-752b12dc-5d1c-49b4-8869-bfa6d8236845 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:45.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-770" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":586,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:44.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 20 22:11:44.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2697 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 20 22:11:45.075: INFO: stderr: "" May 20 22:11:45.075: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run May 20 22:11:45.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2697 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' May 20 22:11:45.475: INFO: stderr: "" May 20 22:11:45.475: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 20 22:11:45.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2697 delete pods e2e-test-httpd-pod' May 20 22:11:47.491: INFO: stderr: "" May 20 22:11:47.491: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:47.491: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2697" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":41,"skipped":693,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:32.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod May 20 22:11:32.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 20 22:11:32.789: INFO: stderr: "" May 20 22:11:32.789: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. May 20 22:11:32.789: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 20 22:11:32.789: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3962" to be "running and ready, or succeeded" May 20 22:11:32.791: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078461ms May 20 22:11:34.795: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00605127s May 20 22:11:36.799: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009882669s May 20 22:11:38.806: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.017270763s May 20 22:11:38.806: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 20 22:11:38.806: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 20 22:11:38.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 logs logs-generator logs-generator' May 20 22:11:38.981: INFO: stderr: "" May 20 22:11:38.981: INFO: stdout: "I0520 22:11:37.081088 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/chh 390\nI0520 22:11:37.281162 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/9pwz 501\nI0520 22:11:37.481522 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/gp7 259\nI0520 22:11:37.681813 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/4vwc 242\nI0520 22:11:37.882179 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/nmg 376\nI0520 22:11:38.081479 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/4xt 468\nI0520 22:11:38.282012 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/5jk9 255\nI0520 22:11:38.481384 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/ch5 438\nI0520 22:11:38.681742 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/pqb 540\nI0520 22:11:38.882140 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/plg 318\n" STEP: limiting log lines May 20 22:11:38.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 logs logs-generator logs-generator --tail=1' May 20 22:11:39.142: INFO: stderr: "" May 20 22:11:39.142: INFO: stdout: "I0520 22:11:39.081439 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/qst 202\n" May 20 22:11:39.142: INFO: got output "I0520 22:11:39.081439 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/qst 202\n" STEP: limiting log bytes May 20 22:11:39.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 logs logs-generator logs-generator --limit-bytes=1' May 20 22:11:39.315: INFO: stderr: "" May 20 22:11:39.315: INFO: stdout: "I" May 20 22:11:39.315: INFO: got output "I" STEP: exposing timestamps May 20 22:11:39.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 logs logs-generator logs-generator --tail=1 --timestamps' May 20 22:11:39.482: INFO: stderr: "" May 20 22:11:39.482: INFO: stdout: "2022-05-20T22:11:39.282236919Z I0520 22:11:39.281918 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/52t7 507\n" May 20 22:11:39.482: INFO: got output "2022-05-20T22:11:39.282236919Z I0520 22:11:39.281918 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/52t7 507\n" STEP: restricting to a time range May 20 22:11:41.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 logs logs-generator logs-generator --since=1s' May 20 22:11:42.146: INFO: stderr: "" May 20 22:11:42.146: INFO: stdout: "I0520 22:11:41.281868 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/4s7 223\nI0520 22:11:41.481189 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/bk5n 368\nI0520 22:11:41.681666 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/95bh 287\nI0520 22:11:41.881769 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/d4n 384\nI0520 22:11:42.081362 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/lnz 361\n" May 20 22:11:42.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 logs logs-generator logs-generator --since=24h' May 20 22:11:42.335: INFO: stderr: "" May 20 22:11:42.335: INFO: stdout: "I0520
22:11:37.081088 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/chh 390\nI0520 22:11:37.281162 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/9pwz 501\nI0520 22:11:37.481522 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/gp7 259\nI0520 22:11:37.681813 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/4vwc 242\nI0520 22:11:37.882179 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/nmg 376\nI0520 22:11:38.081479 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/4xt 468\nI0520 22:11:38.282012 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/5jk9 255\nI0520 22:11:38.481384 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/ch5 438\nI0520 22:11:38.681742 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/pqb 540\nI0520 22:11:38.882140 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/plg 318\nI0520 22:11:39.081439 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/qst 202\nI0520 22:11:39.281918 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/52t7 507\nI0520 22:11:39.481216 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/l7w 347\nI0520 22:11:39.681534 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/4lf 347\nI0520 22:11:39.882045 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/fbk 596\nI0520 22:11:40.081226 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/qmf 519\nI0520 22:11:40.281423 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/nq6 201\nI0520 22:11:40.481778 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/425l 253\nI0520 22:11:40.682142 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/5z4q 491\nI0520 22:11:40.881394 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/2ksw 258\nI0520 22:11:41.081694 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/9lq8 279\nI0520 22:11:41.281868 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/4s7 223\nI0520 22:11:41.481189 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/bk5n 368\nI0520 22:11:41.681666 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/95bh 287\nI0520 22:11:41.881769 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/d4n 384\nI0520 22:11:42.081362 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/lnz 361\nI0520 22:11:42.281687 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/7jd 538\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 May 20 22:11:42.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3962 delete pod logs-generator' May 20 22:11:48.890: INFO: stderr: "" May 20 22:11:48.891: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:48.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3962" for this suite. 
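For reference, the --tail, --limit-bytes, --timestamps and --since flags exercised above map directly onto the PodLogOptions struct that kubectl sends to the API server. Below is a minimal client-go sketch of the same log query; the kubeconfig path, namespace, pod and container names are taken from the run above, and the filters are combined into one request for brevity (the test applies them one at a time).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	tail := int64(1)  // --tail=1: last line only
	limit := int64(1) // --limit-bytes=1: truncate the result to one byte
	since := int64(1) // --since=1s: entries from the last second

	opts := &corev1.PodLogOptions{
		Container:    "logs-generator",
		TailLines:    &tail,
		LimitBytes:   &limit,
		SinceSeconds: &since,
		Timestamps:   true, // --timestamps: prefix each line with an RFC3339Nano time
	}

	// GetLogs returns a rest.Request; Do(...).Raw() fetches the bytes.
	raw, err := client.CoreV1().Pods("kubectl-3962").
		GetLogs("logs-generator", opts).
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", raw)
}

Note how LimitBytes truncates mid-line (the --limit-bytes=1 run above got back the single byte "I"), while TailLines always counts whole lines.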
• [SLOW TEST:16.280 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":24,"skipped":453,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:47.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-046af37d-df64-4c38-9719-8f581351f1e7 STEP: Creating a pod to test consume configMaps May 20 22:11:47.543: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d" in namespace "projected-2481" to be "Succeeded or Failed" May 20 22:11:47.547: INFO: Pod "pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.828105ms May 20 22:11:49.552: INFO: Pod "pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008349659s May 20 22:11:51.556: INFO: Pod "pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013014491s STEP: Saw pod success May 20 22:11:51.556: INFO: Pod "pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d" satisfied condition "Succeeded or Failed" May 20 22:11:51.559: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d container agnhost-container: STEP: delete the pod May 20 22:11:51.582: INFO: Waiting for pod pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d to disappear May 20 22:11:51.584: INFO: Pod pod-projected-configmaps-4f0f045c-0177-4480-8b67-c859560ab11d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:51.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2481" for this suite. 
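"With mappings" in the test name refers to the items field of a ConfigMap projection: each key can be remapped to an arbitrary relative path inside the volume instead of a file named after the key. A sketch of a pod spec using such a mapping follows; the ConfigMap name "my-configmap", the key "data-1", the target path and the namespace are illustrative assumptions, not values read from the run above.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
								// The mapping: key "data-1" appears in the volume
								// as "path/to/data-2", not as a file named "data-1".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "show-mapped-key",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}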
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":694,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:10:55.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 20 22:10:55.964: INFO: PodSpec: initContainers in spec.initContainers May 20 22:11:51.615: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-892a1d04-087f-4c51-b04f-0b34ee67950d", GenerateName:"", Namespace:"init-container-3082", SelfLink:"", UID:"7e20c02a-76be-4a27-aac5-117f8126d764", ResourceVersion:"48983", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"964265689"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.123\"\n ],\n \"mac\": \"ea:9e:89:d2:25:73\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.123\"\n ],\n \"mac\": \"ea:9e:89:d2:25:73\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0025cbdd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025cbde8)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0025cbe00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025cbe18)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0025cbe30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025cbe48)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-n52dm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00498f000), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-n52dm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-n52dm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-n52dm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a729a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c7ad90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a72a30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a72a50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a72a58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a72a5c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002a429e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681455, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.3.123", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.123"}}, StartTime:(*v1.Time)(0xc0025cbe78), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c7af50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c7afc0)}, Ready:false, RestartCount:3, 
Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://3263382c7d1ccd88131c754558e4255d2bd6e2fca26061b23bda4a27308804ad", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00498f0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00498f060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002a72adf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:51.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3082" for this suite. • [SLOW TEST:55.679 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":189,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:45.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:11:45.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187" in namespace "projected-9073" to be "Succeeded or Failed" May 20 
22:11:45.659: INFO: Pod "downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449593ms May 20 22:11:47.662: INFO: Pod "downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005856027s May 20 22:11:49.669: INFO: Pod "downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012376783s May 20 22:11:51.672: INFO: Pod "downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015942257s STEP: Saw pod success May 20 22:11:51.672: INFO: Pod "downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187" satisfied condition "Succeeded or Failed" May 20 22:11:51.677: INFO: Trying to get logs from node node2 pod downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187 container client-container: STEP: delete the pod May 20 22:11:51.755: INFO: Waiting for pod downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187 to disappear May 20 22:11:51.757: INFO: Pod downwardapi-volume-c4b41ba8-07fa-4a60-9b10-6ff322334187 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:51.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9073" for this suite. • [SLOW TEST:6.145 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":598,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:51.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-d359b899-6c73-44b7-b637-ae84faeb48fa STEP: Creating a pod to test consume secrets May 20 22:11:51.671: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce" in namespace "projected-2534" to be "Succeeded or Failed" May 20 22:11:51.673: INFO: Pod "pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389916ms May 20 22:11:53.677: INFO: Pod "pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006182295s May 20 22:11:55.682: INFO: Pod "pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010853443s May 20 22:11:57.684: INFO: Pod "pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013247042s STEP: Saw pod success May 20 22:11:57.684: INFO: Pod "pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce" satisfied condition "Succeeded or Failed" May 20 22:11:57.686: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce container projected-secret-volume-test: STEP: delete the pod May 20 22:11:57.700: INFO: Waiting for pod pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce to disappear May 20 22:11:57.702: INFO: Pod pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:57.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2534" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":716,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:51.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 22:11:57.841: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:57.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9921" for this suite. 
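The termination-message behaviour verified above (Expected: &{DONE}) is driven by two fields: the container's terminationMessagePath selects the file the kubelet reads back after the container exits, and the pod-level runAsUser makes the write happen as a non-root user. A sketch of such a pod spec, assuming an illustrative UID and path (neither is shown in this log):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nonRoot := int64(1000) // illustrative non-root UID

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run as non-root so the write below is performed by an
			// unprivileged user, as in the test above.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Write the message to a non-default path, then exit.
				Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// After the container exits, the kubelet copies the file's contents into
	// status.containerStatuses[0].state.terminated.message ("DONE" here).
}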
• [SLOW TEST:6.071 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":608,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:57.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods May 20 22:11:57.898: INFO: created test-pod-1 May 20 22:11:57.906: INFO: created test-pod-2 May 20 22:11:57.915: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:57.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7864" for this suite. 
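Deleting a collection of pods, as exercised above, is a single API call rather than a loop of per-pod deletes. A minimal client-go sketch using the namespace from the run above; the label selector is an illustrative assumption, since the selector the test uses is not shown in this log.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One call removes every pod matching the selector; this is the
	// "delete a collection of pods" API the test exercises.
	err = client.CoreV1().Pods("pods-7864").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"}, // assumed selector
	)
	if err != nil {
		panic(err)
	}
}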
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":41,"skipped":613,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:06:36.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5852 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5852 STEP: Creating statefulset with conflicting port in namespace statefulset-5852 STEP: Waiting until pod test-pod will start running in namespace statefulset-5852 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5852 May 20 22:11:46.721: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000525500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000525500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000525500, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 20 22:11:46.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5852 describe po test-pod' May 20 22:11:46.910: INFO: stderr: "" May 20 22:11:46.910: INFO: stdout: "Name: test-pod\nNamespace: statefulset-5852\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 20 May 2022 22:06:36 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.248\"\n ],\n \"mac\": \"56:e2:20:1c:f5:75\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.248\"\n ],\n \"mac\": \"56:e2:20:1c:f5:75\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.4.248\nIPs:\n IP: 10.244.4.248\nContainers:\n webserver:\n Container ID: docker://996be438bb48b2fbf3a29d7d1780b980de07a71083eb947de970dd1feb273007\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 20 May 2022 22:06:40 +0000\n Ready: True\n Restart Count: 
0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fd5mt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-fd5mt:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m7s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m6s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 531.614455ms\n Normal Created 5m6s kubelet Created container webserver\n Normal Started 5m6s kubelet Started container webserver\n" May 20 22:11:46.911: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-5852 Priority: 0 Node: node1/10.10.190.207 Start Time: Fri, 20 May 2022 22:06:36 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.248" ], "mac": "56:e2:20:1c:f5:75", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.248" ], "mac": "56:e2:20:1c:f5:75", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.4.248 IPs: IP: 10.244.4.248 Containers: webserver: Container ID: docker://996be438bb48b2fbf3a29d7d1780b980de07a71083eb947de970dd1feb273007 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 20 May 2022 22:06:40 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fd5mt (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-fd5mt: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m7s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m6s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 531.614455ms Normal Created 5m6s kubelet Created container webserver Normal Started 5m6s kubelet Started container webserver May 20 22:11:46.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5852 logs test-pod --tail=100' May 20 22:11:47.090: INFO: stderr: "" May 20 22:11:47.090: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.248. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.248. 
Set the 'ServerName' directive globally to suppress this message\n[Fri May 20 22:06:40.972779 2022] [mpm_event:notice] [pid 1:tid 140498030889832] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri May 20 22:06:40.972810 2022] [core:notice] [pid 1:tid 140498030889832] AH00094: Command line: 'httpd -D FOREGROUND'\n" May 20 22:11:47.090: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.248. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.248. Set the 'ServerName' directive globally to suppress this message [Fri May 20 22:06:40.972779 2022] [mpm_event:notice] [pid 1:tid 140498030889832] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Fri May 20 22:06:40.972810 2022] [core:notice] [pid 1:tid 140498030889832] AH00094: Command line: 'httpd -D FOREGROUND' May 20 22:11:47.090: INFO: Deleting all statefulset in ns statefulset-5852 May 20 22:11:47.093: INFO: Scaling statefulset ss to 0 May 20 22:11:47.102: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:11:57.107: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-5852". STEP: Found 7 events. May 20 22:11:57.121: INFO: At 2022-05-20 22:06:36 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] May 20 22:11:57.121: INFO: At 2022-05-20 22:06:36 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] May 20 22:11:57.121: INFO: At 2022-05-20 22:06:39 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9100]] May 20 22:11:57.121: INFO: At 2022-05-20 22:06:39 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" May 20 22:11:57.121: INFO: At 2022-05-20 22:06:40 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 531.614455ms May 20 22:11:57.121: INFO: At 2022-05-20 22:06:40 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver May 20 22:11:57.121: INFO: At 2022-05-20 22:06:40 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver May 20 22:11:57.123: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:11:57.123: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:06:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:06:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:06:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:06:36 +0000 UTC }] May 20 22:11:57.123: INFO: May 20 22:11:57.128: INFO: Logging node info for node master1 May 20 22:11:57.130: INFO: Node Info: &Node{ObjectMeta:{master1 b016dcf2-74b7-4456-916a-8ca363b9ccc3 49079 0 2022-05-20 20:01:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:11:57.130: INFO: Logging kubelet events for node master1 May 20 22:11:57.133: INFO: Logging pods the kubelet thinks is on node master1 May 20 22:11:57.141: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.141: INFO: Container kube-multus ready: true, restart count 1 May 20 22:11:57.141: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.141: INFO: Container docker-registry ready: true, restart count 0 May 20 22:11:57.141: INFO: Container nginx ready: true, restart count 0 May 20 22:11:57.141: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.141: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:11:57.141: INFO: Container prometheus-operator ready: true, restart count 0 May 20 22:11:57.141: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.141: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:11:57.141: INFO: Container node-exporter ready: true, restart count 0 May 20 22:11:57.142: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.142: INFO: Container kube-scheduler ready: true, restart count 1 May 20 22:11:57.142: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.142: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:11:57.142: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.142: INFO: Container kube-controller-manager ready: true, restart count 3 May 20 22:11:57.142: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.142: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:11:57.142: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:11:57.142: INFO: Init container install-cni ready: true, restart count 2 May 20 22:11:57.142: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:11:57.142: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.142: INFO: Container nfd-controller ready: true, restart count 0 May 20 22:11:57.233: INFO: Latency metrics for node master1 May 20 22:11:57.233: INFO: Logging node info for node master2 May 20 22:11:57.235: INFO: Node Info: &Node{ObjectMeta:{master2 ddc04b08-e43a-4e18-a612-aa3bf7f8411e 49117 0 2022-05-20 20:01:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:11:57.236: INFO: Logging kubelet events for node master2 May 20 22:11:57.238: INFO: Logging pods the kubelet thinks is on node master2 May 20 22:11:57.248: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container kube-scheduler ready: true, restart count 3 May 20 22:11:57.248: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container kube-multus ready: true, restart count 1 May 20 22:11:57.248: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container kube-proxy ready: true, restart count 1 May 20 22:11:57.248: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:11:57.248: INFO: Init container install-cni ready: true, restart count 2 May 20 22:11:57.248: INFO: Container kube-flannel ready: true, restart count 1 May 20 22:11:57.248: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container coredns ready: true, restart count 1 May 20 22:11:57.248: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container autoscaler ready: true, restart count 1 May 20 22:11:57.248: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.248: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:11:57.248: INFO: Container node-exporter ready: true, restart count 0 May 20 22:11:57.248: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:11:57.248: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.248: INFO: Container kube-controller-manager ready: true, restart count 2 May 20 22:11:57.330: INFO: Latency metrics for node master2 May 20 22:11:57.330: INFO: Logging node info for node master3 May 20 22:11:57.333: INFO: Node Info: &Node{ObjectMeta:{master3 f42c1bd6-d828-4857-9180-56c73dcc370f 49123 0 2022-05-20 20:02:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:11:56 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:11:57.334: INFO: Logging kubelet events for node master3 May 20 22:11:57.335: INFO: Logging pods the kubelet thinks is on node master3 May 20 22:11:57.343: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.343: INFO: Container coredns ready: true, restart count 1 May 20 22:11:57.343: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.343: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:11:57.343: INFO: Container node-exporter ready: true, restart count 0 May 20 22:11:57.343: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.343: INFO: Container kube-apiserver ready: true, restart count 0 May 20 22:11:57.343: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.343: INFO: Container kube-multus ready: true, restart count 1 May 20 22:11:57.343: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.343: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:11:57.343: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:11:57.343: INFO: Init container install-cni ready: true, restart count 0 May 20 22:11:57.343: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:11:57.343: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.343: INFO: Container kube-controller-manager ready: true, restart count 1 May 20 22:11:57.343: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.343: INFO: Container kube-scheduler ready: true, restart count 2 May 20 22:11:57.431: INFO: Latency metrics for node master3 May 20 22:11:57.431: INFO: Logging node info for node node1 May 20 22:11:57.434: INFO: Node Info: &Node{ObjectMeta:{node1 65c381dd-b6f5-4e67-a327-7a45366d15af 49027 0 2022-05-20 20:03:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:15:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:11:57.435: INFO: Logging kubelet events for node node1 May 20 22:11:57.437: INFO: Logging pods the kubelet thinks is on node node1 May 20 22:11:57.452: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 
container statuses recorded) May 20 22:11:57.452: INFO: Container discover ready: false, restart count 0 May 20 22:11:57.452: INFO: Container init ready: false, restart count 0 May 20 22:11:57.452: INFO: Container install ready: false, restart count 0 May 20 22:11:57.452: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.452: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:11:57.452: INFO: test-pod started at 2022-05-20 22:06:36 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.452: INFO: Container webserver ready: true, restart count 0 May 20 22:11:57.452: INFO: test-webserver-78e24097-06d9-4a09-92f5-649892c8b93d started at 2022-05-20 22:08:45 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.452: INFO: Container test-webserver ready: true, restart count 0 May 20 22:11:57.452: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:11:57.452: INFO: Init container install-cni ready: true, restart count 2 May 20 22:11:57.452: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:11:57.452: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.452: INFO: Container nodereport ready: true, restart count 0 May 20 22:11:57.453: INFO: Container reconcile ready: true, restart count 0 May 20 22:11:57.453: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:11:57.453: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:11:57.453: INFO: liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 started at 2022-05-20 22:09:29 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container agnhost-container ready: false, restart count 4 May 20 22:11:57.453: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container kube-multus ready: true, restart count 1 May 20 22:11:57.453: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.453: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:11:57.453: INFO: Container node-exporter ready: true, restart count 0 May 20 22:11:57.453: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:11:57.453: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded) May 20 22:11:57.453: INFO: Container config-reloader ready: true, restart count 0 May 20 22:11:57.453: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:11:57.453: INFO: Container grafana ready: true, restart count 0 May 20 22:11:57.453: INFO: Container prometheus ready: true, restart count 1 May 20 22:11:57.453: INFO: netserver-0 started at 2022-05-20 22:11:40 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container webserver ready: false, restart count 0 May 20 22:11:57.453: INFO: ss2-0 started at 2022-05-20 22:11:09 +0000 UTC (0+1 container statuses 
recorded) May 20 22:11:57.453: INFO: Container webserver ready: true, restart count 0 May 20 22:11:57.453: INFO: ss2-1 started at 2022-05-20 22:11:13 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container webserver ready: true, restart count 0 May 20 22:11:57.453: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.453: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:11:57.453: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:11:57.453: INFO: Container collectd ready: true, restart count 0 May 20 22:11:57.453: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:11:57.453: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:11:57.668: INFO: Latency metrics for node node1 May 20 22:11:57.668: INFO: Logging node info for node node2 May 20 22:11:57.671: INFO: Node Info: &Node{ObjectMeta:{node2 a0e0a426-876d-4419-96e4-c6977ef3393c 49035 0 2022-05-20 20:03:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true 
flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 20:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 22:11:53 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 22:11:57.673: INFO: Logging kubelet events for node node2 May 20 22:11:57.676: INFO: Logging pods the kubelet thinks is on node node2 May 20 22:11:57.690: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:11:57.690: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:11:57.690: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded) May 20 22:11:57.690: INFO: Init container install-cni ready: true, restart count 1 May 20 22:11:57.690: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:11:57.690: INFO: netserver-1 started at 2022-05-20 22:11:40 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container webserver ready: false, 
restart count 0 May 20 22:11:57.690: INFO: busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 started at 2022-05-20 22:11:51 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 ready: false, restart count 0 May 20 22:11:57.690: INFO: termination-message-container8062c24d-d4a6-4d19-a09c-8f046cd89410 started at 2022-05-20 22:11:51 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container termination-message-container ready: false, restart count 0 May 20 22:11:57.690: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:11:57.690: INFO: pod-init-892a1d04-087f-4c51-b04f-0b34ee67950d started at 2022-05-20 22:10:55 +0000 UTC (2+1 container statuses recorded) May 20 22:11:57.690: INFO: Init container init1 ready: false, restart count 3 May 20 22:11:57.690: INFO: Init container init2 ready: false, restart count 0 May 20 22:11:57.690: INFO: Container run1 ready: false, restart count 0 May 20 22:11:57.690: INFO: pod-projected-secrets-f79ac4f8-4c1c-45e6-ade0-4f69c189f3ce started at 2022-05-20 22:11:51 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container projected-secret-volume-test ready: false, restart count 0 May 20 22:11:57.690: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:11:57.690: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.690: INFO: Container nodereport ready: true, restart count 0 May 20 22:11:57.690: INFO: Container reconcile ready: true, restart count 0 May 20 22:11:57.690: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded) May 20 22:11:57.690: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:11:57.690: INFO: Container node-exporter ready: true, restart count 0 May 20 22:11:57.690: INFO: ss2-2 started at 2022-05-20 22:11:55 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container webserver ready: false, restart count 0 May 20 22:11:57.690: INFO: liveness-84260980-5b9b-4ca1-ad66-c01371d43ddb started at 2022-05-20 22:11:22 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container agnhost-container ready: true, restart count 0 May 20 22:11:57.690: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:11:57.690: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container kube-multus ready: true, restart count 1 May 20 22:11:57.690: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:11:57.690: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded) May 20 22:11:57.690: INFO: Container tas-extender ready: true, restart count 0 May 20 22:11:57.690: INFO: cmk-init-discover-node2-b7gw4 started 
at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded) May 20 22:11:57.690: INFO: Container discover ready: false, restart count 0 May 20 22:11:57.691: INFO: Container init ready: false, restart count 0 May 20 22:11:57.691: INFO: Container install ready: false, restart count 0 May 20 22:11:57.691: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded) May 20 22:11:57.691: INFO: Container collectd ready: true, restart count 0 May 20 22:11:57.691: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:11:57.691: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:11:58.768: INFO: Latency metrics for node node2 May 20 22:11:58.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5852" for this suite. • Failure [322.110 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:11:46.721: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:51.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:11:51.712: INFO: The status of Pod busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:53.715: INFO: The status of Pod busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:55.716: INFO: The status of Pod busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:57.715: INFO: The status of Pod busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:59.718: INFO: The status of Pod busybox-scheduling-457eae15-9863-4e73-a83e-1c18f9204485 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:11:59.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1497" for this suite. 
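The kubelet logs test above boils down to two API calls: create a pod that writes a known string to stdout, then read the container log back through the apiserver. A minimal client-go sketch of that flow; the pod name, namespace, and echoed string are illustrative, not the suite's generated values:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A pod that writes a known string to stdout and exits, like the
	// busybox-scheduling-* pod in the log above (names here are illustrative).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "echo 'Hello from busybox'"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// In real use you would poll until the pod leaves Pending, exactly as the
	// suite does above, before asking for logs.
	raw, err := cs.CoreV1().Pods("default").GetLogs("busybox-logs", &corev1.PodLogOptions{}).DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("container logs: %s\n", raw)
}
```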
• [SLOW TEST:8.070 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":213,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:57.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:11:58.079: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:12:00.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681518, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681518, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681518, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681518, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:12:03.101: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:03.203: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-3211" for this suite. STEP: Destroying namespace "webhook-3211-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.502 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:57.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 20 22:11:58.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89" in namespace "projected-2082" to be "Succeeded or Failed" May 20 22:11:58.010: INFO: Pod "downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.575011ms May 20 22:12:00.014: INFO: Pod "downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00633403s May 20 22:12:02.017: INFO: Pod "downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009941263s May 20 22:12:04.021: INFO: Pod "downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013902701s STEP: Saw pod success May 20 22:12:04.021: INFO: Pod "downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89" satisfied condition "Succeeded or Failed" May 20 22:12:04.024: INFO: Trying to get logs from node node2 pod downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89 container client-container: STEP: delete the pod May 20 22:12:04.036: INFO: Waiting for pod downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89 to disappear May 20 22:12:04.040: INFO: Pod downwardapi-volume-fc8a4fda-83bd-4d86-aa64-23f3f14ded89 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:04.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2082" for this suite. 
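The projected downward-API test above passes because the kubelet renders the container's own memory limit into a file inside the pod, which the container then cats to stdout. A sketch of the kind of pod spec involved, assuming illustrative names and a 64Mi limit (the suite's generated names and values differ):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod builds a pod whose projected downward-API volume exposes the
// container's memory limit as a file; the container prints it and exits.
func memoryLimitPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
```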
• [SLOW TEST:6.075 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":633,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":31,"skipped":526,"failed":0} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:09:29.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 in namespace container-probe-4872 May 20 22:09:33.810: INFO: Started pod liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 in namespace container-probe-4872 STEP: checking the pod's current state and verifying that restartCount is present May 20 22:09:33.813: INFO: Initial restart count of pod liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 is 0 May 20 22:09:53.864: INFO: Restart count of pod container-probe-4872/liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 is now 1 (20.050527362s elapsed) May 20 22:10:11.904: INFO: Restart count of pod container-probe-4872/liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 is now 2 (38.091018273s elapsed) May 20 22:10:31.941: INFO: Restart count of pod container-probe-4872/liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 is now 3 (58.128294914s elapsed) May 20 22:10:51.988: INFO: Restart count of pod container-probe-4872/liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 is now 4 (1m18.174760678s elapsed) May 20 22:12:04.143: INFO: Restart count of pod container-probe-4872/liveness-28498808-55ef-4e2b-acf0-d537b9fa3028 is now 5 (2m30.330247787s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:04.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4872" for this suite. 
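The container-probe test above works by giving a container a liveness probe that can never succeed and then watching pod.Status.ContainerStatuses[0].RestartCount tick upward, as the "Restart count ... is now N" lines show. One way to build such a pod, as a sketch (busybox and the exec probe are stand-ins for the suite's agnhost-based setup):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingLivenessPod builds a pod whose liveness probe always fails, so the
// kubelet restarts the container repeatedly; the test asserts the restart
// count only ever increases.
func failingLivenessPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 3600"},
				LivenessProbe: &corev1.Probe{
					// In the v1.21 API this embedded field is named Handler;
					// later releases renamed it to ProbeHandler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"cat", "/tmp/does-not-exist"},
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
```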
• [SLOW TEST:154.388 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":526,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:48.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:11:48.977: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 20 22:11:57.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 create -f -' May 20 22:11:58.088: INFO: stderr: "" May 20 22:11:58.088: INFO: stdout: "e2e-test-crd-publish-openapi-8420-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 20 22:11:58.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 delete e2e-test-crd-publish-openapi-8420-crds test-foo' May 20 22:11:58.269: INFO: stderr: "" May 20 22:11:58.269: INFO: stdout: "e2e-test-crd-publish-openapi-8420-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 20 22:11:58.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 apply -f -' May 20 22:11:58.638: INFO: stderr: "" May 20 22:11:58.638: INFO: stdout: "e2e-test-crd-publish-openapi-8420-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 20 22:11:58.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 delete e2e-test-crd-publish-openapi-8420-crds test-foo' May 20 22:11:58.822: INFO: stderr: "" May 20 22:11:58.822: INFO: stdout: "e2e-test-crd-publish-openapi-8420-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 20 22:11:58.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 create -f -' May 20 22:11:59.182: INFO: rc: 1 May 20 22:11:59.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 apply -f -' May 20 22:11:59.485: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 20 
22:11:59.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 create -f -' May 20 22:11:59.808: INFO: rc: 1 May 20 22:11:59.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 --namespace=crd-publish-openapi-1826 apply -f -' May 20 22:12:00.112: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 20 22:12:00.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 explain e2e-test-crd-publish-openapi-8420-crds' May 20 22:12:00.458: INFO: stderr: "" May 20 22:12:00.458: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8420-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 20 22:12:00.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 explain e2e-test-crd-publish-openapi-8420-crds.metadata' May 20 22:12:00.811: INFO: stderr: "" May 20 22:12:00.811: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8420-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 20 22:12:00.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 explain e2e-test-crd-publish-openapi-8420-crds.spec' May 20 22:12:01.193: INFO: stderr: "" May 20 22:12:01.193: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8420-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 20 22:12:01.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 explain e2e-test-crd-publish-openapi-8420-crds.spec.bars' May 20 22:12:01.546: INFO: stderr: "" May 20 22:12:01.546: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8420-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 20 22:12:01.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1826 explain e2e-test-crd-publish-openapi-8420-crds.spec.bars2' May 20 22:12:01.912: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:05.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1826" for this suite. 
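Each `kubectl explain` call above is an ordinary subprocess invocation whose output is generated from the CRD's published OpenAPI v2 schema; the rc: 1 on `.spec.bars2` shows explain failing cleanly for a property that is not in the schema. A sketch of the same invocation driven from Go (the CRD name is the one generated in this run and will differ elsewhere):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Runs the same kind of `kubectl explain` the test drives, against a CRD's
// published OpenAPI schema.
func main() {
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config",
		"explain", "e2e-test-crd-publish-openapi-8420-crds.spec").CombinedOutput()
	// A non-nil err corresponds to the "rc: 1" lines in the log above.
	fmt.Printf("err=%v\n%s", err, out)
}
```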
• [SLOW TEST:16.652 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":25,"skipped":482,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":44,"skipped":726,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:03.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-14644f5f-f793-49af-9c37-cd3160f7b4d3 STEP: Creating a pod to test consume configMaps May 20 22:12:03.271: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940" in namespace "projected-5314" to be "Succeeded or Failed" May 20 22:12:03.273: INFO: Pod "pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940": Phase="Pending", Reason="", readiness=false. Elapsed: 1.832751ms May 20 22:12:05.277: INFO: Pod "pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005649483s May 20 22:12:07.281: INFO: Pod "pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009838094s STEP: Saw pod success May 20 22:12:07.281: INFO: Pod "pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940" satisfied condition "Succeeded or Failed" May 20 22:12:07.285: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940 container agnhost-container: STEP: delete the pod May 20 22:12:07.401: INFO: Waiting for pod pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940 to disappear May 20 22:12:07.404: INFO: Pod pod-projected-configmaps-c60257a0-e08e-4174-8c47-a3aab2247940 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:07.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5314" for this suite. 
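The "as non-root" variant above differs from the plain projected-ConfigMap case only in the pod-level security context. A sketch with illustrative names and UID (the suite generates its own):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootProjectedConfigMapPod mounts a ConfigMap through a projected volume
// and runs the container as a non-root UID.
func nonRootProjectedConfigMapPod(name, configMapName string) *corev1.Pod {
	uid := int64(1000) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox:1.28",
				Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected-configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
		},
	}
}
```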
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:59.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 22:12:07.850: INFO: DNS probes using dns-7885/dns-test-24524f76-d7c2-47d8-b11e-c1093454e8dd succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:07.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7885" for this suite. 
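The getent lookups in the wheezy/jessie probe scripts resolve because a headless Service plus a pod with hostname and subdomain set yields an A record of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local. A sketch of the two objects, reusing the dns-querier-2 / dns-test-service-2 names from this run (selector, labels, and port are illustrative):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessServiceAndPod returns a headless Service and a pod whose
// hostname/subdomain pair makes the pod resolvable as
// dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local.
func headlessServiceAndPod(ns string) (*corev1.Service, *corev1.Pod) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: ns},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-querier-2",
			Namespace: ns,
			Labels:    map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
			}},
		},
	}
	return svc, pod
}
```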
• [SLOW TEST:8.095 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":234,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:07.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0520 22:12:07.931712 23 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching May 20 22:12:07.938: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 20 22:12:07.941: INFO: starting watch STEP: patching STEP: updating May 20 22:12:07.955: INFO: waiting for watch events with expected annotations May 20 22:12:07.955: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:07.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-399" for this suite. 
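Note the deprecation warning above: the test still talks to batch/v1beta1, but batch/v1 CronJob is available from v1.21 onward. A sketch of an equivalent object against the v1 API (name, schedule, and command are illustrative):

```go
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// basicCronJob builds the kind of object the API-operations test creates,
// gets, lists, watches, patches, and deletes, using batch/v1 rather than the
// deprecated batch/v1beta1.
func basicCronJob(name string) *batchv1.CronJob {
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "hello",
								Image:   "busybox:1.28",
								Command: []string{"sh", "-c", "date"},
							}},
						},
					},
				},
			},
		},
	}
}
```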
• ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":16,"skipped":253,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:04.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-5519/configmap-test-67ce18a9-2c5d-47ad-9355-18b48f38f01d STEP: Creating a pod to test consume configMaps May 20 22:12:04.100: INFO: Waiting up to 5m0s for pod "pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c" in namespace "configmap-5519" to be "Succeeded or Failed" May 20 22:12:04.103: INFO: Pod "pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239353ms May 20 22:12:06.106: INFO: Pod "pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005481861s May 20 22:12:08.110: INFO: Pod "pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009468509s May 20 22:12:10.114: INFO: Pod "pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014020759s STEP: Saw pod success May 20 22:12:10.114: INFO: Pod "pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c" satisfied condition "Succeeded or Failed" May 20 22:12:10.117: INFO: Trying to get logs from node node2 pod pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c container env-test: STEP: delete the pod May 20 22:12:10.129: INFO: Waiting for pod pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c to disappear May 20 22:12:10.132: INFO: Pod pod-configmaps-8746a1d8-6153-4c6f-a476-7656ad87dc5c no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:10.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5519" for this suite. 
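The env-test container above simply runs `env`; the interesting part is the EnvVarSource wiring that pulls a ConfigMap key into the container environment. A sketch with illustrative names (the suite uses generated ones like configmap-test-67ce18a9-...):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// envFromConfigMapPod injects a single ConfigMap key into the container
// environment; the test then checks the container's output for the value.
func envFromConfigMapPod(name, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}
```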
• [SLOW TEST:6.075 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":641,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:10.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:10.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-881" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":44,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":585,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:40.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6382 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 22:11:40.080: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 22:11:40.110: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:42.113: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 22:11:44.115: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:11:46.113: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:11:48.114: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 
22:11:50.114: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:11:52.114: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:11:54.114: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:11:56.113: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:11:58.115: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:12:00.113: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 22:12:02.113: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 22:12:02.117: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 22:12:04.121: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 22:12:10.141: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 20 22:12:10.141: INFO: Breadth first check of 10.244.4.71 on host 10.10.190.207... May 20 22:12:10.144: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.152:9080/dial?request=hostname&protocol=udp&host=10.244.4.71&port=8081&tries=1'] Namespace:pod-network-test-6382 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:12:10.144: INFO: >>> kubeConfig: /root/.kube/config May 20 22:12:10.446: INFO: Waiting for responses: map[] May 20 22:12:10.446: INFO: reached 10.244.4.71 after 0/1 tries May 20 22:12:10.446: INFO: Breadth first check of 10.244.3.141 on host 10.10.190.208... May 20 22:12:10.448: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.152:9080/dial?request=hostname&protocol=udp&host=10.244.3.141&port=8081&tries=1'] Namespace:pod-network-test-6382 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 22:12:10.448: INFO: >>> kubeConfig: /root/.kube/config May 20 22:12:10.536: INFO: Waiting for responses: map[] May 20 22:12:10.536: INFO: reached 10.244.3.141 after 0/1 tries May 20 22:12:10.536: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:10.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6382" for this suite. 
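Each "Breadth first check" above is an HTTP GET against the test-container-pod's /dial endpoint, which in turn sends a UDP "hostname" request to the target netserver and reports the answers back as JSON. A sketch of that client-side call (the IPs are parameters; 9080 and 8081 are the ports this run uses):

```go
package sketch

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// dialCheck reproduces the curl the test execs inside its client pod: ask the
// proxy pod's webserver to dial the target over UDP and echo what it heard.
func dialCheck(proxyPodIP, targetIP string) (string, error) {
	url := fmt.Sprintf(
		"http://%s:9080/dial?request=hostname&protocol=udp&host=%s&port=8081&tries=1",
		proxyPodIP, targetIP)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}
```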
• [SLOW TEST:30.490 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":585,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:07.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:12:07.921: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:12:09.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:12:11.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63788681527, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:12:14.945: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:14.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4701" for this suite. STEP: Destroying namespace "webhook-4701-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.454 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":46,"skipped":789,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:05.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 20 22:12:05.780: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:15.747: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "init-container-3203" for this suite. • [SLOW TEST:10.000 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":26,"skipped":564,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:08.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-f4422ba8-b306-43e9-a0eb-8c7e23e00f60 STEP: Creating a pod to test consume configMaps May 20 22:12:08.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529" in namespace "configmap-1943" to be "Succeeded or Failed" May 20 22:12:08.065: INFO: Pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296194ms May 20 22:12:10.069: INFO: Pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006006963s May 20 22:12:12.072: INFO: Pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009127032s May 20 22:12:14.078: INFO: Pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015043872s May 20 22:12:16.081: INFO: Pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018130886s STEP: Saw pod success May 20 22:12:16.081: INFO: Pod "pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529" satisfied condition "Succeeded or Failed" May 20 22:12:16.084: INFO: Trying to get logs from node node1 pod pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529 container agnhost-container: STEP: delete the pod May 20 22:12:16.098: INFO: Waiting for pod pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529 to disappear May 20 22:12:16.100: INFO: Pod pod-configmaps-9ffb837c-37ff-4327-90f7-643d89d19529 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:16.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1943" for this suite. 
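The "with mappings" case above differs from a plain ConfigMap volume only in the Items list, which remaps a key to a new relative path inside the mount. A sketch of that volume source, using the conventional data-1 -> path/to/data-2 mapping these tests exercise:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// mappedConfigMapVolume mounts one ConfigMap key under a remapped path: the
// key data-1 appears inside the pod as <mountPath>/path/to/data-2.
func mappedConfigMapVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-2",
				}},
			},
		},
	}
}
```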
• [SLOW TEST:8.080 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":276,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:15.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint STEP: mirroring an update to a custom Endpoint May 20 22:12:15.043: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint May 20 22:12:17.053: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:19.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-402" for this suite. 
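------------------------------
The EndpointSliceMirroring spec above drives a selector-less custom Endpoints object through create, update (note the transient "Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3" poll line), and delete, waiting for the control plane to mirror each state into an EndpointSlice. A minimal sketch of such a custom Endpoints object (name and port are placeholders):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A selector-less ("custom") Endpoints object. The mirroring
        // controller is expected to keep an EndpointSlice in sync with
        // it: create, the 10.1.2.3 -> 10.2.3.4 address update, delete.
        ep := corev1.Endpoints{
            ObjectMeta: metav1.ObjectMeta{Name: "example-custom-endpoints"},
            Subsets: []corev1.EndpointSubset{{
                Addresses: []corev1.EndpointAddress{{IP: "10.1.2.3"}},
                Ports: []corev1.EndpointPort{{
                    Name:     "example",
                    Port:     80,
                    Protocol: corev1.ProtocolTCP,
                }},
            }},
        }
        fmt.Printf("%+v\n", ep.Subsets)
    }
------------------------------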
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":47,"skipped":796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 20 22:12:19.133: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:10.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics May 20 22:12:20.402: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 20 22:12:20.462: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: May 20 22:12:20.462: INFO: Deleting pod "simpletest-rc-to-be-deleted-5nqvz" in namespace "gc-5050" May 20 22:12:20.471: INFO: Deleting pod "simpletest-rc-to-be-deleted-5wgks" in namespace "gc-5050" May 20 22:12:20.477: INFO: Deleting pod "simpletest-rc-to-be-deleted-5x5lh" in namespace "gc-5050" May 20 22:12:20.483: INFO: Deleting pod "simpletest-rc-to-be-deleted-66lrd" in namespace "gc-5050" May 20 22:12:20.489: INFO: Deleting pod "simpletest-rc-to-be-deleted-95rlr" in namespace "gc-5050" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:20.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5050" for this suite. 
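------------------------------
The garbage-collector spec above gives half of the pods created by simpletest-rc-to-be-deleted a second owner, simpletest-rc-to-stay, then deletes the first RC and checks that the doubly-owned pods survive. The mechanics reduce to ownerReferences plus a deletion propagation policy; a minimal sketch (the UIDs are placeholders for the real RCs' UIDs):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A dependent pod carrying both owners.
        ownerRefs := []metav1.OwnerReference{
            {APIVersion: "v1", Kind: "ReplicationController",
                Name: "simpletest-rc-to-be-deleted", UID: "uid-1"},
            {APIVersion: "v1", Kind: "ReplicationController",
                Name: "simpletest-rc-to-stay", UID: "uid-2"},
        }

        // Deleting rc-to-be-deleted with foreground propagation blocks
        // until its exclusively-owned dependents are gone; pods that
        // also list rc-to-stay keep a valid owner and must survive,
        // which is exactly what the spec asserts.
        foreground := metav1.DeletePropagationForeground
        deleteOpts := metav1.DeleteOptions{PropagationPolicy: &foreground}

        fmt.Println(len(ownerRefs), *deleteOpts.PropagationPolicy)
    }
------------------------------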
• [SLOW TEST:10.212 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:10.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition May 20 22:12:10.584: INFO: Waiting up to 5m0s for pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed" in namespace "var-expansion-7916" to be "Succeeded or Failed" May 20 22:12:10.587: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208321ms May 20 22:12:12.590: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005226939s May 20 22:12:14.593: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008988645s May 20 22:12:16.597: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012417985s May 20 22:12:18.600: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015368524s May 20 22:12:20.603: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018852121s May 20 22:12:22.606: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021683411s May 20 22:12:24.613: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.028423824s STEP: Saw pod success May 20 22:12:24.613: INFO: Pod "var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed" satisfied condition "Succeeded or Failed" May 20 22:12:24.615: INFO: Trying to get logs from node node2 pod var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed container dapi-container: STEP: delete the pod May 20 22:12:24.628: INFO: Waiting for pod var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed to disappear May 20 22:12:24.630: INFO: Pod var-expansion-7810eb32-df39-4204-9d59-2c22ceb783ed no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:24.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7916" for this suite. 
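------------------------------
The Variable Expansion spec boils down to $(NAME) references between entries of a container's env list, which the kubelet expands in declaration order. A minimal sketch (variable names and values are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Later entries may reference earlier ones with $(NAME); the
        // kubelet expands them in declaration order, so inside the
        // container FOOBAR becomes "foo-value;;bar-value".
        env := []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            {Name: "BAR", Value: "bar-value"},
            {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
        }
        fmt.Println(env[2].Value)
    }
------------------------------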
• [SLOW TEST:14.088 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":586,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} May 20 22:12:24.640: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:15.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 20 22:12:23.878: INFO: &Pod{ObjectMeta:{send-events-b340e15d-8273-4729-9411-77202180fdd3 events-398 48cd2407-8006-4795-b5fa-313b118f9975 50137 0 2022-05-20 22:12:15 +0000 UTC map[name:foo time:850989707] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.82" ], "mac": "c2:8d:03:79:ea:68", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.82" ], "mac": "c2:8d:03:79:ea:68", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-20 22:12:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-20 22:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-20 22:12:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9jjlw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jjlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:12:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:12:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-20 22:12:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.82,StartTime:2022-05-20 22:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-20 22:12:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://3b4040205699e58801a83d327d68309d9f74d0397fbb3f6e6d8ea1b9c054e8e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 20 22:12:25.882: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 20 22:12:27.886: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:27.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-398" for this suite. 
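------------------------------
The Events spec above waits for one scheduler event and one kubelet event about the pod. Outside the suite, the same check can be approximated by listing events filtered to the pod with a field selector; a minimal client-go sketch (the kubeconfig path, namespace, and pod name are taken from this log purely for illustration):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a reachable cluster via the kubeconfig used above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Filtering by involvedObject narrows the list to the pod under
        // test; the spec then looks for a scheduler event and a kubelet
        // event among the results.
        sel := "involvedObject.kind=Pod,involvedObject.name=send-events-b340e15d-8273-4729-9411-77202180fdd3"
        events, err := cs.CoreV1().Events("events-398").List(context.TODO(),
            metav1.ListOptions{FieldSelector: sel})
        if err != nil {
            panic(err)
        }
        for _, e := range events.Items {
            fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
        }
    }
------------------------------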
• [SLOW TEST:12.070 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":27,"skipped":607,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} May 20 22:12:27.902: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:16.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 22:12:16.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 22:12:18.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:12:20.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 20 22:12:22.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788681536, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 22:12:25.591: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:12:25.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9146-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:33.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3514" for this suite. STEP: Destroying namespace "webhook-3514-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.585 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":18,"skipped":286,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} May 20 22:12:33.715: INFO: Running AfterSuite actions on all nodes {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":27,"skipped":417,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:58.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6964 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-6964 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6964 May 20 22:11:58.816: INFO: Found 0 stateful pods, waiting for 1 May 20 22:12:08.820: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 20 22:12:08.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:12:09.149: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:12:09.149: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:12:09.149: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:12:09.152: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 20 22:12:19.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 22:12:19.157: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:12:19.170: INFO: 
POD NODE PHASE GRACE CONDITIONS May 20 22:12:19.170: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:19.170: INFO: May 20 22:12:19.170: INFO: StatefulSet ss has not reached scale 3, at 1 May 20 22:12:20.174: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996583607s May 20 22:12:21.179: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992502503s May 20 22:12:22.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987878379s May 20 22:12:23.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984079712s May 20 22:12:24.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980797921s May 20 22:12:25.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.977263703s May 20 22:12:26.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.973086741s May 20 22:12:27.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.970036832s May 20 22:12:28.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 966.588644ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6964 May 20 22:12:29.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:12:29.485: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:12:29.485: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:12:29.485: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:12:29.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:12:29.740: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 20 22:12:29.740: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:12:29.740: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:12:29.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:12:29.995: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 20 22:12:29.995: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:12:29.995: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:12:29.998: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true 
May 20 22:12:29.998: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:12:29.998: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 20 22:12:30.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:12:30.276: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:12:30.276: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:12:30.276: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:12:30.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:12:30.526: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:12:30.526: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:12:30.526: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:12:30.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6964 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:12:30.775: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:12:30.775: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:12:30.775: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:12:30.775: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:12:30.777: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 20 22:12:40.783: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 22:12:40.783: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 20 22:12:40.784: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 20 22:12:40.792: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:40.792: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:40.793: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:40.793: INFO: ss-2 node2 Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:40.793: INFO: May 20 22:12:40.793: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 22:12:41.798: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:41.798: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:41.798: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:41.798: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:41.798: INFO: May 20 22:12:41.798: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 22:12:42.802: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:42.802: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:42.802: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:42.802: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:42.802: INFO: May 20 22:12:42.802: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 22:12:43.815: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:43.815: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:43.815: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:43.815: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:43.815: INFO: May 20 22:12:43.815: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 22:12:44.821: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:44.821: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:44.821: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:44.821: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:44.821: INFO: May 20 22:12:44.821: INFO: 
StatefulSet ss has not reached scale 0, at 3 May 20 22:12:45.826: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:45.826: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:11:58 +0000 UTC }] May 20 22:12:45.827: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:45.827: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:45.827: INFO: May 20 22:12:45.827: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 22:12:46.831: INFO: POD NODE PHASE GRACE CONDITIONS May 20 22:12:46.831: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 22:12:19 +0000 UTC }] May 20 22:12:46.832: INFO: May 20 22:12:46.832: INFO: StatefulSet ss has not reached scale 0, at 1 May 20 22:12:47.836: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.95716093s May 20 22:12:48.842: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.951539806s May 20 22:12:49.848: INFO: Verifying statefulset ss doesn't scale past 0 for another 946.020299ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6964 May 20 22:12:50.851: INFO: Scaling statefulset ss to 0 May 20 22:12:50.862: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 20 22:12:50.864: INFO: Deleting all statefulset in ns statefulset-6964 May 20 22:12:50.867: INFO: Scaling statefulset ss to 0 May 20 22:12:50.875: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:12:50.877: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:50.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "statefulset-6964" for this suite. • [SLOW TEST:52.119 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":28,"skipped":417,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} May 20 22:12:50.904: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:08:45.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-78e24097-06d9-4a09-92f5-649892c8b93d in namespace container-probe-9617 May 20 22:08:51.364: INFO: Started pod test-webserver-78e24097-06d9-4a09-92f5-649892c8b93d in namespace container-probe-9617 STEP: checking the pod's current state and verifying that restartCount is present May 20 22:08:51.366: INFO: Initial restart count of pod test-webserver-78e24097-06d9-4a09-92f5-649892c8b93d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:12:51.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9617" for this suite. 
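------------------------------
The Probing container spec above runs for roughly four minutes solely to verify that restartCount stays at 0 under a passing HTTP liveness probe. A minimal sketch of such a probe (path and port are illustrative; assigning through the promoted fields sidesteps the Handler -> ProbeHandler struct rename across k8s.io/api versions):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            InitialDelaySeconds: 15,
            TimeoutSeconds:      1,
            FailureThreshold:    3, // consecutive failures before a restart
        }
        // HTTPGet is a promoted field of the embedded handler struct,
        // so this assignment compiles against both older (Handler) and
        // newer (ProbeHandler) versions of k8s.io/api.
        probe.HTTPGet = &corev1.HTTPGetAction{
            Path: "/healthz",
            Port: intstr.FromInt(8080),
        }
        fmt.Printf("%+v\n", probe)
    }
------------------------------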
• [SLOW TEST:246.568 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":490,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} May 20 22:12:51.895: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:09.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-876 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 20 22:11:09.816: INFO: Found 0 stateful pods, waiting for 3 May 20 22:11:19.822: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 22:11:19.822: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:11:19.822: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 20 22:11:29.822: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 22:11:29.822: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 22:11:29.822: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 20 22:11:29.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-876 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:11:30.086: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:11:30.086: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:11:30.086: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 20 22:11:40.116: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 20 22:11:50.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-876 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 
22:11:50.386: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:11:50.386: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:11:50.386: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:12:00.407: INFO: Waiting for StatefulSet statefulset-876/ss2 to complete update May 20 22:12:00.407: INFO: Waiting for Pod statefulset-876/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 20 22:12:00.407: INFO: Waiting for Pod statefulset-876/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 20 22:12:10.417: INFO: Waiting for StatefulSet statefulset-876/ss2 to complete update May 20 22:12:10.417: INFO: Waiting for Pod statefulset-876/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 20 22:12:20.416: INFO: Waiting for StatefulSet statefulset-876/ss2 to complete update May 20 22:12:20.416: INFO: Waiting for Pod statefulset-876/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Rolling back to a previous revision May 20 22:12:30.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-876 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 22:12:30.662: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 20 22:12:30.662: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 22:12:30.662: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 22:12:40.696: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 20 22:12:50.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-876 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 22:12:50.988: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 20 22:12:50.988: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 22:12:50.988: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 22:13:01.007: INFO: Waiting for StatefulSet statefulset-876/ss2 to complete update May 20 22:13:01.008: INFO: Waiting for Pod statefulset-876/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 20 22:13:11.013: INFO: Deleting all statefulset in ns statefulset-876 May 20 22:13:11.015: INFO: Scaling statefulset ss2 to 0 May 20 22:13:41.028: INFO: Waiting for statefulset status.replicas updated to 0 May 20 22:13:41.030: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:13:41.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-876" for this suite. 
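------------------------------
In the rolling-update spec above, the image flip (httpd:2.4.38-1 -> 2.4.39-1) and the rollback are both driven by the StatefulSet's RollingUpdate strategy: each template change becomes a ControllerRevision, and the "Waiting for Pod ... to have revision ..." lines compare the pods' controller-revision-hash label against status.updateRevision. A minimal sketch of the strategy itself:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        // Partition 0 (the default) means a template change rolls over
        // every ordinal, highest first; raising it would hold back the
        // lower ordinals, which is how staged rollouts are done.
        partition := int32(0)
        strategy := appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                Partition: &partition,
            },
        }
        fmt.Println(strategy.Type) // RollingUpdate
    }
------------------------------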
• [SLOW TEST:151.265 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":12,"skipped":140,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} May 20 22:13:41.053: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:12:04.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0520 22:12:04.209122 24 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:14:00.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2991" for this suite. 
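------------------------------
The CronJob spec's "Ensuring the job is replaced with a new one" step hinges on ConcurrencyPolicy: Replace, under which a job still running when the next schedule fires is deleted and a fresh one started. The warning in the log also notes that batch/v1beta1 is deprecated; a minimal sketch against batch/v1 (schedule, image, and command are placeholders):

    package main

    import (
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cj := batchv1.CronJob{
            ObjectMeta: metav1.ObjectMeta{Name: "replace-concurrent-example"},
            Spec: batchv1.CronJobSpec{
                Schedule: "*/1 * * * *",
                // Replace: kill a still-running job when the next
                // schedule fires and start a new one in its place.
                ConcurrencyPolicy: batchv1.ReplaceConcurrent,
                JobTemplate: batchv1.JobTemplateSpec{
                    Spec: batchv1.JobSpec{
                        Template: corev1.PodTemplateSpec{
                            Spec: corev1.PodSpec{
                                RestartPolicy: corev1.RestartPolicyOnFailure,
                                Containers: []corev1.Container{{
                                    Name:    "c",
                                    Image:   "busybox", // placeholder image
                                    Command: []string{"sleep", "300"},
                                }},
                            },
                        },
                    },
                },
            },
        }
        fmt.Println(cj.Spec.ConcurrencyPolicy)
    }
------------------------------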
• [SLOW TEST:116.063 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":33,"skipped":538,"failed":0} May 20 22:14:00.250: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:11:22.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-84260980-5b9b-4ca1-ad66-c01371d43ddb in namespace container-probe-7906 May 20 22:11:26.737: INFO: Started pod liveness-84260980-5b9b-4ca1-ad66-c01371d43ddb in namespace container-probe-7906 STEP: checking the pod's current state and verifying that restartCount is present May 20 22:11:26.740: INFO: Initial restart count of pod liveness-84260980-5b9b-4ca1-ad66-c01371d43ddb is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:15:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7906" for this suite. 
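------------------------------
The TCP variant works the same way as the HTTP probe sketched earlier: as long as something accepts connections on the probed port, the kubelet never restarts the container, and the spec's four-minute watch sees restartCount stay at 0. A minimal sketch (the port is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Same shape as the HTTP probe, but success is just a completed
        // TCP handshake on the port.
        probe := corev1.Probe{InitialDelaySeconds: 15, FailureThreshold: 3}
        probe.TCPSocket = &corev1.TCPSocketAction{Port: intstr.FromInt(8080)}
        fmt.Printf("%+v\n", probe)
    }
------------------------------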
• [SLOW TEST:244.663 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":885,"failed":0}
May 20 22:15:27.362: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":45,"skipped":677,"failed":0}
May 20 22:12:20.511: INFO: Running AfterSuite actions on all nodes
May 20 22:15:27.429: INFO: Running AfterSuite actions on node 1
May 20 22:15:27.429: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 320 of 5773 Specs in 810.039 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 13m31.678077755s
Test Suite Failed